# Google Maps - Hall of Fame & Hall of Shame
.left-column[
Do this now: [poll](https://edstem.org/us/courses/21053/lessons/31195/slides/180617)
]
.right-column[
![:img google maps,100%,width](img/ml/google.jpg)
]
.footnote[Picture from [Machine Learning on your Phone](https://www.appypie.com/top-machine-learning-mobile-apps)]
???
fame/shame neighborhood traffic... shorter commutes...
---
name: inverse
layout: true
class: center, middle, inverse
---
# Sensing and Machine Learning

Lauren Bricker

CSE 340 Spring 2022

.footnote[Slides credit: Jason Hong, Carnegie Mellon University, [Are my Devices Spying on Me? Living in a World of Ubiquitous Computing](https://www.slideshare.net/jas0nh0ng/are-my-devices-spying-on-me-living-in-a-world-of-ubiquitous-computing)]
---
layout: false

[//]: # (Outline Slide)

# Today's Agenda

**Do this now**: Answer the [poll](https://edstem.org/us/courses/21053/lessons/31195/slides/180617) in Ed

- Administrivia
  - Undo code and video due Fri 20-May, 10pm
  - Undo reflection due Sun 22-May, 10pm
  - Final project info out Fri 20-May
- Learning goals
  - Discuss useful applications of sensing
  - Investigate sensing and location basics
  - Define different types of context-aware apps
  - Briefly define machine learning (ML) and how it figures into context-aware apps
  - Consider the ethical and security implications of context-aware apps
---
# Smartphones
.left-column[
[Polls](https://edstem.org/us/courses/21053/lessons/26122/slides/152226): How many of you

- sleep with your phone?
- check your phone first thing in the morning?
- use your phone in the bathroom?
]
.right-column[
![:img Millennial with phone in bed, 100%,width](img/ml/phone-bed.jpg)
]
---
# Smartphones

Fun Facts about Millennials

.left-column[
![:fa thumbs-down] 83% sleep with phones
]
.right-column[
![:img Millennial with phone in bed, 100%,width](img/ml/phone-bed.jpg)
]
---
count: false
# Smartphones

Fun Facts about Millennials

.left-column[
![:fa thumbs-down] 83% sleep with phones

![:fa thumbs-down] 90% check first thing in morning
]
.right-column[
![:img Millennial with phone in bed, 100%,width](img/ml/phone-bed.jpg)
]
---
count: false
# Smartphones

Fun Facts about Millennials

.left-column[
![:fa thumbs-down] 83% sleep with phones

![:fa thumbs-down] 90% check first thing in morning

![:fa thumbs-down] 1 in 3 use in bathroom
]
.right-column[
![:img Millennial with phone in bed, 100%,width](img/ml/phone-bed.jpg)
]
---
# Smartphone Data is Intimate

![:img Picture of smart phone screens with phone numbers; map; and sensor data, 60%,width](img/ml/personal.png)

| Who we know           | Sensors               | Where we go   |
|-----------------------|-----------------------|---------------|
| (contacts + call log) | (accel, sound, light) | (gps, photos) |
---
# Example: COVID-19 Contact Tracing
.left-column-half[
![:img Picture of a COVID-19 contact tracing app showing motivational text like Join the fight and Get motivated if you come into contact with COVID-19, 100%, width](img/ml/contact-tracing.jpg)
]
.right-column-half[
- Install an app on your phone
- Turn on Bluetooth
- Keep track of every Bluetooth ID you see
- Sends Bluetooth ID (securely)
- Will only notify people if you report you contracted COVID
- That's when others are notified (securely and privately) that they were in the vicinity of someone who contracted COVID
]
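---
# Sketch: Logging Nearby Bluetooth IDs

A minimal sketch (not in the original deck) of the "keep track of every Bluetooth ID you see" step, using Android's BLE scanner. Real contact-tracing apps exchange rotating, privacy-preserving IDs; here we just collect device addresses, and we assume scan permissions have already been granted.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.ScanCallback;
import android.bluetooth.le.ScanResult;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper class, for illustration only.
public class ContactScanner {
    private final Set<String> seenIds = new HashSet<>();

    private final ScanCallback callback = new ScanCallback() {
        @Override
        public void onScanResult(int callbackType, ScanResult result) {
            // Record every Bluetooth ID we see.
            seenIds.add(result.getDevice().getAddress());
        }
    };

    public void start(BluetoothAdapter adapter) {
        adapter.getBluetoothLeScanner().startScan(callback);
    }

    public void stop(BluetoothAdapter adapter) {
        adapter.getBluetoothLeScanner().stopScan(callback);
    }
}
```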
---
# Other Useful Applications of Sensing

![:img Picture of LeafSnap app, 60%,width](img/ml/leafsnap.jpg)

.footnote[[LeafSnap](http://leafsnap.com/) uses computer vision to identify trees by their leaves]
---
# Other Useful Applications of Sensing

![:img Picture of Aipoly app, 60%,width](img/ml/aipoly.jpg)

.footnote[[Vision AI](https://www.aipoly.com/) uses computer vision to identify images for the Blind and Visually Impaired]
---
# Other Useful Applications of Sensing

![:img Picture of Carat app, 60%,width](img/ml/Carat.jpg)

.footnote[[Carat: Collaborative Energy Diagnosis](http://carat.cs.helsinki.fi/) uses machine learning to save battery life]
---
# Other Useful Applications of Sensing

![:img Picture of Imprompdo app, 50%,width](img/ml/imprompdo.jpg)

.footnote[[Imprompdo](http://imprompdo.webflow.io/) uses machine learning to recommend activities to do, both fun and to-dos]
---
# How do these systems work?
.left-column50[
## Old style of app design
graph TD
I(Input) --Explicit Interaction--> A(Application)
A --> Act(Action)
classDef normal fill:#e6f3ff,stroke:#333,stroke-width:2px;
class U,C,A,I,S,E,Act,Act2 normal
]

--
count: false
.right-column50[
## New style of app design
graph TD
U(User) --Implicit Sensing--> C(Context-Aware Application)
S(System) --Implicit Sensing--> C
E(Environment) --Implicit Sensing--> C
C --> Act2(Action)
classDef normal fill:#e6f3ff,stroke:#333,stroke-width:2px;
class U,C,A,I,S,E,Act,Act2 normal
]
---
.left-column-half[
## Types of Sensors

| | | |
|--|--|--|
| Clicks | Key presses | Touch |
| Microphone | Camera | IoT devices |
| Accelerometer | Rotation | Screen |
| Applications | Location | Telephony |
| Battery | Magnetometer | Temperature |
| Bluetooth | Network Usage | Traffic |
| Calls | Orientation | WiFi |
| Messaging | Pressure | Processor |
| Gravity | Proximity | Humidity |
| Gyroscope | Light | Multi-touch |
| ... | ... | ... |
]
.right-column-half[
![:img Picture of different sensors, 70%, width](img/ml/android-sensors.png)

- Which of these are event based?
- Which of these are "sampled"?
]
???
---
.left-column-half[
## Types of Sensors

| | | |
|--|--|--|
| Clicks | Key presses | Touch |
| Microphone | Camera | IoT devices |
| Accelerometer | Rotation | Screen |
| Applications | Location | Telephony |
| Battery | Magnetometer | Temperature |
| Bluetooth | Network Usage | Traffic |
| Calls | Orientation | WiFi |
| Messaging | Pressure | Processor |
| Gravity | Proximity | Humidity |
| Gyroscope | Light | Multi-touch |
| ... | ... | ... |
]
.right-column-half[
## Contact Tracing

Other than Bluetooth, what sensors might be useful for contact tracing? Why?
]
---
# Sensing: Categories of Sensors

* Motion Sensors
  * Measure acceleration forces and rotational forces along three axes
  * Include accelerometers, gravity sensors, gyroscopes, and the rotational vector sensor
  * Accelerometers and gyroscopes are generally HW based; gravity, linear acceleration, rotation vector, significant motion, step counter, and step detector may be HW or SW based
* Environmental Sensors
  * Measure relative ambient humidity, illuminance, ambient pressure, and ambient temperature
  * All four sensors are HW based
* Position Sensors
  * Determine the physical position of the device
  * Include orientation, magnetometer, and proximity sensors
  * The geomagnetic field sensor and proximity sensor are HW based
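---
# Example: Sampling a Motion Sensor

A minimal sketch (not in the original deck) of reading one "sampled" sensor, the accelerometer, through Android's `SensorManager`: you register a listener at a requested rate, and the system calls back with readings.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Hypothetical helper class, for illustration only.
public class AccelReader implements SensorEventListener {
    private final SensorManager sensorManager;

    public AccelReader(SensorManager sensorManager) {
        this.sensorManager = sensorManager;
    }

    public void start() {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // SENSOR_DELAY_NORMAL asks for samples at a rate suitable for UI updates.
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    public void stop() {
        sensorManager.unregisterListener(this); // always unregister to save battery
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2]; // m/s^2
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```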
---
# Location

- Locations are generally reported as latitude and longitude coordinates
- Generally determined using the Global Positioning System (GPS)
- Can also use cell tower or WiFi localization
- Three ways to do this:
  - [LocationManager](https://stuff.mit.edu/afs/sipb/project/android/docs/training/basics/location/locationmanager.html)
  - [Fused Location Provider API](https://developers.google.com/location-context/fused-location-provider)
      - Need to set up [Google Play Services](https://developer.android.com/google/play-services/setup) on the device (including emulators)
      - Less power consumption
  - [Google Awareness API](https://developers.google.com/awareness)
      - Requires Google credits ($$$)
---
# Location: details

- Specify app permissions
  - Do we need to get the location in the foreground or background?
  - Do we want a precise or approximate location?
- How do we want to get the data?
  - At a particular time (once)
  - At regular intervals
  - When a certain event occurs, like someone going out of a boundary
???
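---
# Example: Requesting Location Updates

A minimal sketch (not in the original deck) of the "regular intervals" case using the Fused Location Provider. It assumes `ACCESS_FINE_LOCATION` has already been granted and that `this` is an Activity; the interval and priority values are arbitrary choices for illustration.

```java
import android.os.Looper;
import com.google.android.gms.location.*;

FusedLocationProviderClient client =
        LocationServices.getFusedLocationProviderClient(this);

LocationRequest request = LocationRequest.create()
        .setInterval(60_000) // ask for a fix roughly once a minute
        .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY);

client.requestLocationUpdates(request, new LocationCallback() {
    @Override
    public void onLocationResult(LocationResult result) {
        double lat = result.getLastLocation().getLatitude();
        double lng = result.getLastLocation().getLongitude();
        // ... use the coordinates ...
    }
}, Looper.getMainLooper());
```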
---
# Snapshots vs Fences

Listeners are used to receive sensor or location updates at different intervals
- This is like taking a snapshot of the sensor or location at a point in time

Fences (or geofences) are a way to create an area of interest around a specific location
- The fence's listener is called any time its condition is true
---
# Google Awareness API in Android

- A way to have your app react to the current situation
- Seven signals (time, location, places, activity (like walking), beacons, headphones, and weather)
- Allows for:
  - Ease of implementation
  - Better context data (raw signals are processed for quality)
  - Optimal system health (less impact on battery life)
- Includes two APIs
  - [Fence API](https://developers.google.com/awareness/android-api/fence-api-overview)
  - [Snapshot API](https://developers.google.com/awareness/android-api/snapshot-api-overview)
---
# Snapshot with Google Awareness API

Setting up the callback (just like callbacks for other events):

```java
Awareness.getSnapshotClient(this).getDetectedActivity()
    .addOnSuccessListener(new OnSuccessListener<DetectedActivityResponse>() {
        @Override
        public void onSuccess(DetectedActivityResponse dar) {
            ActivityRecognitionResult arr = dar.getActivityRecognitionResult();
        }
    });
```
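
The response can then be inspected further; for instance, `arr.getMostProbableActivity()` should give back the most likely `DetectedActivity` along with a confidence value.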
---
# Fences with Google Awareness API

Notify you *every time* a condition is true

```java
// Create the primitive fences.
AwarenessFence walkingFence =
    DetectedActivityFence.during(DetectedActivityFence.WALKING);
```

Use the `FenceClient` to register a fence: build a `FenceUpdateRequest`, calling `addFence()` for each fence to add, then pass it to `FenceClient.updateFences()`, as sketched below.
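
A sketch of what that registration might look like (the fence key and the `PendingIntent`, which receives fence state changes, are illustrative placeholders, not part of the original slides):

```java
PendingIntent fencePendingIntent = PendingIntent.getBroadcast(
        this, 0, new Intent("FENCE_RECEIVER_ACTION"),
        PendingIntent.FLAG_UPDATE_CURRENT);

Awareness.getFenceClient(this).updateFences(
        new FenceUpdateRequest.Builder()
                .addFence("walkingFenceKey", walkingFence, fencePendingIntent)
                .build());
```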
---
# Contact tracing

Question (answer on [Ed](https://edstem.org/us/courses/21053/lessons/31195/slides/180619)): Would you use Snapshots or Fences for the contact tracing app?

--
count: false

Answer: Fence

We want to be notified about *every* contact so we can record it
---
# Context Aware Apps
.left-column-half[
![:img Picture of a COVID-19 contact tracing app showing motivational text like Join the fight and Get motivated if you come into contact with COVID-19, 90%, width](img/ml/contact-tracing.jpg)
]
.right-column-half[
This is a **Context-Aware** app

- The app is using on-board sensors (not user-initiated input) to capture data
- The app "learns" information about the user
]
---
# Types of Context-Aware apps
.left-column[
![:img Picture of a mobile phone with a text message on screen containing a transcription of recent audio, 100%, width](img/ml/scribe4me.jpeg)
]
.right-column[
**Capture and Access**
- .red[Food diarying] and nutritional awareness via receipt analysis [Ubicomp 2002]
- .bold.red[Audio Accessibility] for deaf people by supporting mobile sound transcription [Ubicomp 2006, CHI 2007]
- .red[Citizen Science] volunteer data collection in the field [CSCW 2013, CHI 2015]
- .red[Air quality assessment] and visualization [CHI 2013]
- .red[Coordinating between patients and doctors] via wearable sensing of in-home physical therapy [CHI 2014]

What Capture and Access apps can you think of?
]
---
# Types of Context-aware apps
.left-column[
![:img Picture of a mobile phone with an unlock gesture that also labels emails for keeping or discarding, 100%, width](img/ml/proactive.png)
]
.right-column[
Capture and Access

**Adaptive Services (changing operation or timing)**
- .red[Adaptive Text Prediction] for assistive communication devices [TOCHI 2005]
- .red[Location prediction] based on prior behavior [Ubicomp 2014]
- .bold.red[Pro-active task access] on lock screen based on predicted user interest [MobileHCI 2014]

What Adaptive Services apps can you think of?
]
---
# Types of Context-aware apps
.left-column[
![:img example interaction above and on the surface of a phone supported by adding a depth camera to the front of the phone -- shows interacting with text, 80%, width](img/ml/airtouch.jpg)
![:img example interaction above and on the surface of a phone supported by adding a depth camera to the front of the phone -- shows interacting with an image, 80%,width](img/ml/airtouch2.jpg)
]
.right-column[
Capture and Access

Adaptive Services (changing operation or timing)
**Novel Interaction**
- .red[Cord Input] for interacting with mobile devices [CHI 2010]
- .red[Smart Watch Intent to Interact] via twist'n'knock gesture [GI 2016]
- .red[VR Intent to Interact] via sensing body pose, gaze and gesture [CHI 2017]
- .red[Around Body interaction] through gestures with the phone [Mobile HCI 2014]
- .red.bold[Around phone interaction] through gestures combining on and above phone surface [UIST 2014]
]
---
# Types of Context-aware apps
.left-column[
![:img example interaction above and on the surface of a phone supported by adding a depth camera to the front of the phone -- shows interacting with text, 80%, width](img/ml/airtouch.jpg)
![:img example interaction above and on the surface of a phone supported by adding a depth camera to the front of the phone -- shows interacting with an image, 80%,width](img/ml/airtouch2.jpg)
]
.right-column[
![:youtube Interweaving touch and in-air gestures using in-air gestures to segment touch gestures, H5niZW6ZhTk]

What Novel Interaction apps can you think of?
]
---
# Types of Context-aware apps
.left-column[
![:img Picture of an interface for simulating driving behavior providing feedback to the user about aggressive driving, 100%, width](img/ml/driving.png)
]
.right-column[
Capture and Access
Adaptive Services (changing operation or timing)
Novel Interaction
**Behavioral Imaging**
- .red[Detecting and Generating Safe Driving Behavior] by using inverse reinforcement learning to create human routine models [CHI 2016, 2017]
- .red[Detecting Deviations in Family Routines] such as being late to pick up kids [CHI 2016]

What Behavioral Imaging apps can you think of?
]
---
# Contact tracing

Question (answer on [Ed](https://edstem.org/us/courses/21053/lessons/26122/slides/152227)): What context aware classification would you give the contact tracing app?

- Capture and Access
- Adaptive Services
- Novel Interaction
- Behavioral Imaging

--
count: false

Answer: Capture and Access

We are capturing information about who you've been around, and indirectly allowing someone else to access that information.
---
# Implementing Sensing: Using Data
.left-column50[
![:fa bed, fa-7x]
]
.right-column50[
## In class exercise

How might you recognize sleep?
- What recognition question?
- What sensors?
]
???
(sleep quality? length?...)
How to interpret sensors?
---
# Implementing Sensing: Using Data
.left-column50[
![:img Sleep trace for accelerometer and sound, 80%,width](img/ml/sleep.png)
]
.right-column50[
## In class exercise

- What recognition question? (sleep quality? length?...)
- What sensors?
- How to interpret sensors?
]
---
# How do we program this?

Write down some rules

Implement them
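
For the sleep exercise, hand-written rules might look like this sketch (the thresholds are invented for illustration, which is exactly what makes this approach brittle):

```java
// Rule-based "is the user asleep?" -- all thresholds are made-up assumptions.
public static boolean looksAsleep(float accelMagnitude, double soundDb, int hourOfDay) {
    boolean still = accelMagnitude < 0.5f;            // phone barely moving
    boolean quiet = soundDb < 30.0;                   // quiet room
    boolean night = hourOfDay >= 22 || hourOfDay < 7; // typical sleep hours
    return still && quiet && night;
}
```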
---
# How do we program this?

Old Approach: Create software by hand
- Use libraries (like JQuery) and frameworks
- Create content, do layout, code up functionality
- Deterministic (code does what you tell it to)

New Approach: Collect data and train algorithms
- Will still do the above, but will also have some functionality based on ML
- *Collect lots of examples and train a ML algorithm*
- *Statistical way of thinking*
---
# This is *Machine Learning*

Machine Learning is often used to process sensor data
- Machine learning is one area of Artificial Intelligence
- This is the kind that's been getting lots of press

The goal of machine learning is to develop systems that can improve performance with more experience
- Can use "example data" as "experience"
- Uses these examples to discern patterns
- And to make predictions
---
# Two main approaches

![:fa eye] *Supervised learning* (we have lots of examples of what should be predicted)

![:fa eye-slash] *Unsupervised learning* (e.g. clustering into groups and inferring what they are about)

![:fa low-vision] Can combine these (semi-supervised)

![:fa history] Can learn over time or train up front
---
# How Machine Learning is Typically Used

Step 1: Gather lots of data (easy on a phone!)

--
count: false

Step 2: Figure out useful features
- Convert data to information (not knowledge!)
- (typically) Collect labels
---
count: false
# How Machine Learning is Typically Used

Step 1: Gather lots of data (easy on a phone!)

Step 2: Figure out useful features

Step 3: Select and train the ML algorithm to make a prediction
- Lots of toolkits for this
- Lots of algorithms to choose from
- Mostly treat as a "black box"
---
# Example: Decision tree for predicting premature birth

![:img decision tree, 50%,width](img/ml/decisiontree.png)
---
# Example: Deep Learning for Image Captioning

![:img Captioning Images. Note the errors, 40%,width](img/ml/captioning.png)

.footnote[[Captioning images. Note the errors.](http://cs.stanford.edu/people/karpathy/deepimagesent/) Deep learning now [available on your phone!](https://www.tensorflow.org/lite)]
???
Note differences between these: one label vs many
---
# Training process

![:img ML Training Process, 60%,width](img/ml/training.png)
---
# How Machine Learning is Typically Used

Step 1: Gather lots of data (easy on a phone!)

Step 2: Figure out useful features

Step 3: Select and train the ML algorithm

Step 4: Evaluate metrics (and iterate)
???
See how well algorithm does using several metrics
Error analysis: what went wrong and why
Iterate: get new data, make new features
---
# Evaluation Concerns

Accuracy: Might be too error-prone
---
.left-column[
## Assessing Accuracy
]
.right-column[
Prior probabilities
- Probability before any observations (i.e. just guessing)
- Ex. ML classifier to guess if an animal is a cat or a ferret based on the ear location
- Assume all pointy-eared fuzzy creatures are cats (some percentage will be right)
- Your trained model needs to do better than the prior

Other baseline approaches
- Cheap and dumb algorithms
- Ex. Classifying cats vs ferrets based on size
- Your model needs to do better than these too
]
???
We did this to study gender's impact on academic authorship; doctors' reviews
---
.left-column[
## Assessing Accuracy
]
.right-column[
Don't just measure accuracy (percent right)

Sometimes we care about *False positives* vs *False negatives*

What examples do you know of false positives or false negatives?
]

--
.right-column[
![:img Hotdog vs not hot dog matrix, 55%,width](img/ml/hotdog.png)
]
.footnote[[Image Source](https://blog.nillsf.com/index.php/2020/05/23/confusion-matrix-accuracy-recall-precision-false-positive-rate-and-f-scores-explained/)]
---
.left-column[
## Assessing Accuracy
]
.right-column[
| | | .red[Prediction] | |
|-------------|--------------|----------------------------|----------------------------|
| | | **Positive** | **Negative** |
| .red[Label] | **Positive** | .red[True Positive (good)] | False Negative (bad) |
| | **Negative** | False Positive (bad) | .red[True Negative (good)] |

Accuracy = (TP + TN) / (TP + FP + TN + FN)

Intuition: How many things did I get right out of all the total cases?
]
---
.left-column[
## Assessing Accuracy
## Precision
]
.right-column[
| | | .red[Prediction] | |
|-------------|--------------|----------------------------|----------------------|
| | | **Positive** | **Negative** |
| .red[Label] | **Positive** | .red[True Positive (good)] | False Negative (bad) |
| | **Negative** | .red[False Positive (bad)] | True Negative (good) |

Precision = TP / (TP + FP)

Intuition: Of the items predicted positive, how many were right?
]
---
.left-column[
## Assessing Accuracy
## Recall
]
.right-column[
| | | Prediction | |
|--------|--------------|----------------------------|----------------------------|
| Actual | | **Positive** | **Negative** |
| | **Positive** | .red[True Positive (good)] | .red[False Negative (bad)] |
| | **Negative** | False Positive (bad) | True Negative (good) |

Recall = TP / (TP + FN)

Intuition: Of all things that should have been positive, how many were actually labeled correctly?
]
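---
# Computing These Metrics

The three formulas above, written out as code (a simple sketch; `tp`, `fp`, `tn`, and `fn` are counts taken from a confusion matrix like the tables on the previous slides):

```java
public static double accuracy(int tp, int fp, int tn, int fn) {
    return (double) (tp + tn) / (tp + fp + tn + fn); // right, out of everything
}

public static double precision(int tp, int fp) {
    return (double) tp / (tp + fp); // right, out of everything predicted positive
}

public static double recall(int tp, int fn) {
    return (double) tp / (tp + fn); // right, out of everything actually positive
}
```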
---
# Avoiding Overfitting
.left-column50[
Overfitting: When your ML model is too specific to the data you have
- Might not generalize well

![:img overfitting, 80%,width](img/ml/overfitting.png)
]

--
count: false
.right-column50[
To avoid overfitting, typically split data into a training set and a test set
- Train model on training set, and test on test set
- Often do this through cross validation

![:img cross validation, 80%,width](img/ml/cross-validation.png)
]
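---
# Sketch: Splitting Data for Testing

A minimal sketch (not in the original deck) of the hold-out split just described; the 80/20 ratio and fixed seed are arbitrary choices for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Shuffle, then hold out 20% as a test set the model never sees in training.
public static <T> List<List<T>> trainTestSplit(List<T> examples, long seed) {
    List<T> shuffled = new ArrayList<>(examples);
    Collections.shuffle(shuffled, new Random(seed));
    int cut = (int) (shuffled.size() * 0.8);
    List<List<T>> split = new ArrayList<>();
    split.add(new ArrayList<>(shuffled.subList(0, cut)));               // training set
    split.add(new ArrayList<>(shuffled.subList(cut, shuffled.size()))); // test set
    return split;
}
```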
---
# How Machine Learning is Typically Used

Step 1: Gather lots of data (easy on a phone!)

Step 2: Figure out useful features

Step 3: Select and train the ML algorithm

Step 4: Evaluate metrics (and iterate)

Step 5: Deploy
---
# What makes this work well?

Typically more data is better

Accurate labels are important

Quality of features determines quality of results

.red[*NOT* as sophisticated as the media makes out]

--
count: false

.red[*BUT* ML can infer all sorts of things]
---
# AI/ML Not As Sophisticated as in Media

A lot of people outside of computer science ascribe human behaviors to AI systems
- Especially desires and intentions
- Works well for sci-fi, but not for today or the near future

These systems only do:
- What we program them to do
- What they are trained to do (based on the (possibly biased) data)
---
# Concerns

Significant Societal Challenges for Privacy (see our security lecture)

--
count: false

Wide Range of Privacy Risks

| Everyday Risks | Medium Risk | Extreme Risks |
|--------------------|---------------------|-------------------|
| Friends, Family | Employer/Government | Stalkers, Hackers |
| Over-protection | Over-monitoring | Well-being |
| Social obligations | Discrimination | Personal safety |
| Embarrassment | Reputation | Blackmail |
| | Civil Liberties | |

- It's not just Big Brother, and it's not just corporations
- Privacy is about our relationships with every other individual and organization out there
---
# Concerns

Significant Societal Challenges for Privacy

Who should have the initiative?

--

- Does the person initiate things? Or the computer?
- How much does the computer system do on your behalf?
- Autonomous vehicle example: some people think Tesla Autopilot is fully autonomous, which leads to risky actions
- Initiative matters
  - Instead of direct manipulation, some smarts (intelligent agent) for automation
- Questions remain
  - What kinds of tasks should be automated / not?
  - Should "intelligence" be anthropomorphized?
  - How can the user learn what the system can and can't do?
  - What are strategies for showing the state of the system?
  - What are strategies for preventing errors?
---
exclude: true
.left-column50[
## Mixed-initiative best practices

- Significant value-added automation
- Considering uncertainty
- Socially appropriate interaction w/ agent
- Consider cost, benefit, uncertainty
- Use dialog to resolve uncertainty
- Support direct invocation and termination
- Remember recent interactions
]
.right-column50[
![:img mixed initiative figure, 100%,width](img/ml/mixed-initiative.png)
]
---
exclude: true
.left-column50[
## Mixed-initiative best practices

- Significant value-added automation
- Considering uncertainty
- Socially appropriate interaction w/ agent
- Consider cost, benefit, uncertainty
- Use dialog to resolve uncertainty
- Support direct invocation and termination
- Remember recent interactions
]
.right-column50[
![:img mixed initiative figure, 100%,width](img/ml/mixed2.png)
]
???
Can see what agent is suggesting, in terms of scheduling a meeting
---
exclude: true
.left-column50[
## Mixed-initiative best practices

- Significant value-added automation
- Considering uncertainty
- Socially appropriate interaction w/ agent
- Consider cost, benefit, uncertainty
- Use dialog to resolve uncertainty
- Support direct invocation and termination
- Remember recent interactions
]
.right-column50[
![:img mixed initiative figure, 100%,width](img/ml/mixed3.png)
]
???
Uses an anthropomorphized agent
Uses speech for input
Uses mediation to help resolve conflict
---
exclude: true
.left-column[
## Mixed-initiative best practices
]
.right-column[
Built-in cost-benefit model in the system
- If perceived benefit >> cost, then do the action
- Otherwise wait

Note that this is just one point in the design space (1999), and still lots of open questions
- Ex. Should "intelligence" be anthropomorphized?
- Ex. How to learn what the system can and can't do?
- Ex. What kinds of tasks should be automated / not?
- Ex. What are strategies for showing the state of the system?
- Ex. What are strategies for preventing errors?
]
---
# Concerns

Significant Societal Challenges for Privacy

Who should have the initiative?

Bias in Machine Learning
---
.quote[Johnson says his jaw dropped when he read one of the reasons American Express gave for lowering his credit limit:

![:fa quote-left] Other customers who have used their card at establishments where you recently shopped have a poor repayment history with American Express.
]

![:img news interview, 80%,width](img/ml/gma.png)
---
![:img bias figure, 60%,width](img/ml/bias.png)
---
# Concerns

Significant Societal Challenges for Privacy

Who should have the initiative?

Bias in Machine Learning

Understanding ML

--
count: false

- How does a system know I am addressing it?
- How do I know a system is attending to me?
- When I issue a command/action, how does the system know what it relates to?
- How do I know that the system correctly understands my command and correctly executes my intended action?

.footnote[Bellotti et al., CHI 2002, 'Making Sense of Sensing']
---
# Wrong location-based recommendation

![:img wrong, 60%,width](img/ml/wrong.png)

Why did it not tell me about the museum? How does it determine my location?

Providing explanations for these questions can make intelligent systems intelligible

Other examples: caregiving hours determined by insurance companies, etc.
---
# Types of feedback

Feedback is crucial to the user's understanding of how a system works and helps guide future actions
- What did the system do?
- What if I do W, what will the system do?
- Why did the system do X?
- Why did the system not do Y?
- How do I get the system to do Z?
---
# Summary

ML and ethics
- ML is powerful (but not perfect), often better than heuristics
- Basic approach is collect data, train, test, deploy
- Hard to understand what algorithms are doing (transparency)
  - ML algorithms just try to optimize, but might end up finding a proxy for race, gender, computer, etc.
  - But hard to inspect these algorithms
  - Still a huge open question
- Privacy
  - How much data should be collected about people?
  - How to communicate this to people?
  - What kinds of inferences are ok?
---
# End of deck