- Problem addressed by the paper
A privacy-disclosure detection system that incorporates user input in its detection coverage.
- Solution proposed in the paper. Why is it better than previous work?
Previous disclosure detection systems are API-based, and the relevant APIs are usually protected by access-permission requirements. This paper proposes a new method that also covers user input, which is not protected by access permissions yet may reveal sensitive data.
- The major results.
In an evaluation on 200 randomly selected popular apps from Google Play, UIPicker accurately labels sensitive user inputs with 93.6% precision and 90.1% recall.
B. Basic idea and approach. How does the solution work?
Similar to SUPOR, UIPicker first analyzes an app’s layout files. However, unlike SUPOR, UIPicker uses sibling elements in the layout files as the descriptive text for a UI widget. In a pre-processing step, it extracts the selected layout-resource texts and reorganizes them using natural-language processing. It then trains a supervised-learning classifier to identify privacy-related user-input elements. Finally, behavior-based result filtering keeps only input fields that require user consent, i.e., those whose input APIs are invoked within user-triggered system callbacks. The system is implemented by extending FlowDroid and MalloDroid.
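The first two steps above (pairing each input widget with sibling description text, then classifying it) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy layout, widget IDs, and keyword list are invented, and a simple keyword match stands in for UIPicker's NLP preprocessing and trained classifier.

```python
import xml.etree.ElementTree as ET

# Toy Android layout resource (hypothetical): each EditText has a
# sibling TextView that serves as its on-screen description.
LAYOUT_XML = """
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
    <TextView android:text="Credit card number" />
    <EditText android:id="@+id/card_input" />
    <TextView android:text="Nickname" />
    <EditText android:id="@+id/nick_input" />
</LinearLayout>
"""

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Stand-in for the paper's supervised classifier: a hand-picked keyword list.
SENSITIVE_KEYWORDS = {"password", "credit", "card", "ssn", "account", "phone"}

def describe_inputs(xml_text):
    """Pair each EditText with the text of its preceding sibling TextView."""
    root = ET.fromstring(xml_text)
    results = []
    for parent in root.iter():
        children = list(parent)
        for i, child in enumerate(children):
            if child.tag == "EditText":
                desc = ""
                if i > 0 and children[i - 1].tag == "TextView":
                    desc = children[i - 1].get(ANDROID_NS + "text", "")
                results.append((child.get(ANDROID_NS + "id", ""), desc))
    return results

def is_sensitive(description):
    """Keyword match in place of UIPicker's NLP + trained classifier."""
    words = {w.lower().strip(".,") for w in description.split()}
    return bool(words & SENSITIVE_KEYWORDS)

# Label every input field found in the layout.
labeled = [(wid, desc, is_sensitive(desc)) for wid, desc in describe_inputs(LAYOUT_XML)]
```

In the real system, the description text feeds an NLP pipeline and a trained classifier rather than a keyword set, and a behavior-based filtering pass over the call graph then discards fields whose input APIs are not reached from user-triggered callbacks.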
C. Strengths
- Its detection coverage is broader than that of previous work.
- Sensitive data should never be sent in plaintext. UIPicker integrates MalloDroid to automatically check for SSL security risks by evaluating SSL usage in apps.
D. Weaknesses
- It will not detect apps that use non-standard UI layouts. Large developers may do this so that their users get a consistent experience across multi-platform apps.
- It will not detect apps developed by non-English speakers, who might name the functions and variables in their app code in a language other than English.
- This paper is less useful for end users since it detects all privacy disclosures without distinguishing legitimate ones from suspicious ones, unlike AAPL ("Checking More and Alerting Less: Detecting Privacy Leakages via Enhanced Data-flow Analysis and Peer Voting", NDSS ’15). Some privacy disclosures may be legitimate because they support users’ tasks.
- This system is mainly useful for app-marketplace providers such as Google Play. It is less useful for end users because deployment by general users would not be easy.
E. Future work, Open issues, possible improvements
- It should be developed further to distinguish legitimate privacy disclosures from suspicious privacy leakages, e.g., by incorporating the peer-voting mechanism from AAPL (NDSS ’15) or another method.
- False positives and false negatives could be further reduced by tuning the system after analyzing their common causes.
- It could also be used to identify apps that legitimately need sensitive data but fail to protect it with encryption, leaving the data open to interception by an attacker in transit.