mEnclosureListeners;
public final void addEnclosureListener(EnclosureListener listener) {
    if (listener == null) {
        throw new IllegalArgumentException("enclosureListener should never be null");
    }
    mEnclosureListeners.add(listener);
}

// The Zookeeper can then call every registered listener, passing the duration of the noise.
public void cacophony(int seconds) {
    for (EnclosureListener listener : mEnclosureListeners) {
        listener.makeNoise(seconds);
    }
}
}
```
---
# Callback Exercise
- The Zookeeper has NO idea what each enclosure will do until the listener is registered and called.
- Each enclosure separately defines how it will react to a makeNoise event.
- The "contract" between the zookeeper and the enclosures is the defined EnclosureListener interface.
---
# Callbacks in Java
- When the toolkit was created, its developers had *no* idea how every app would want to respond to events
- The toolkit has pre-defined interfaces so apps or components can respond to events such as clicks or touches (see the registration sketch after this list). For example:
- The `View` class defines the following Listeners as inner classes:
- `View.OnClickListener`
- `View.OnLongClickListener`
- `View.OnFocusChangeListener`
- `View.OnKeyListener`
- `View.OnTouchListener` (this is used in ColorPicker and Menu by subclassing AppCompatImageView)
- `View.OnCreateContextMenuListener`
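For instance, a minimal sketch of registering one of these listeners from inside an Activity's `onCreate` (the `ok_button` id is hypothetical):
```java
Button okButton = findViewById(R.id.ok_button);
okButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // The toolkit calls back here whenever the user clicks the button
    }
});
```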
We will come back to these in our next class.
---
name: inverse
layout: true
class: center, middle, inverse
---
# Model View Controller, Input Devices and Events
---
name: normal
layout: true
class:
---
# How do you think an app responds to user input?
.left-column30[
```mermaid
graph LR
ap[Application Program]
hlt[High Level Tools]
t[Toolkit]
w[Window System]
o[OS]
h[Hardware]
classDef yellow font-size:14pt,text-align:center
class ap,w,o,h,hlt,t yellow
```
]
.right-column60[
What happens at each level of the hardware stack?
]
---
# How do you think an app responds to user input?
.left-column30[
```mermaid
graph LR
ap[Application Program]
hlt[High Level Tools]
t[Toolkit]
w[Window System]
o[OS]
h[Hardware]
classDef yellow font-size:14pt,text-align:center
class ap,w,o,h,hlt,t yellow
```
]
.right-column60[
What happens at each level of the hardware stack?
- Hardware level: electronics to sense circuits closing or movement
- Differences between hardware (event-based vs. sampled)
- Sensor based input
- OS: "Interrupts" that tell the Window system something happened
- Logical device abstraction and types of devices
- Window system: tells which window received the input
- Toolkit: defines how the app developer will use these events
- Events as an abstraction
- High level tools: Defines standard components that react to events in a certain way
- Application Program: Can use standard components OR create new interactions
- May define separate interaction techniques
]
---
# Responding to Users: Model View Controller (MVC)
.left-column30[
```mermaid
graph TD
View("View *") --1-Input--> Presenter(Controller)
Presenter --5-Output--> View
Presenter --2-Updates--> Model("0,3-Model")
Model --4-State Change--> Presenter
classDef edgeLabel font-size:14pt
classDef blue font-size:14pt,text-align:center
classDef bluegreen font-size:14pt,text-align:center
class View,Presenter blue
class User,Model bluegreen
```
.right-column60[
Suppose this is a digital phone app
* User interacts through the View(s) (the interactor hierarchy)
0. Model State: _Current person:_ Lauren; _Lock state:_ closed
1. Password entry. Trigger _Event Handling_
2. Change state: App unlocked
3. Model State: _Current person:_ Lauren; _Lock state:_ open
4. Change state of View(s)
5. Trigger _Redraw_ and show
]
???
Sketch out key concepts
- Input -- we need to know when people are doing things. This needs to be event driven.
- Output -- we need to show people feedback. This cannot ‘take over’, i.e., it needs to be multi-threaded
- Back end -- we need to be able to talk to the application.
- State machine -- we need to keep track of state.
- What don’t we need? We don’t need to know about the rest of the UI, probably, etc etc
- Model View Controller -- this works within components (draw diagram), but also represents the overall structure (ideally) of a whole user interface
- NOTE: Be careful to write any new vocabulary words on the board and define as they come up.
---
# Responding to Users: Model View Controller (MVC)
.left-column30[
```mermaid
graph TD
View("View *") --1-Input--> Presenter(Controller)
Presenter --5-Output--> View
Presenter --2-Updates--> Model("0,3-Model")
Model --4-State Change--> Presenter
classDef edgeLabel font-size:14pt
classDef blue font-size:14pt,text-align:center
classDef bluegreen font-size:14pt,text-align:center
class View,Presenter blue
class User,Model bluegreen
```
]
.right-column60[
Suppose this is a fancy speech-recognition based digital door lock instead
* User interacts through the View(s) (the interactor hierarchy)
0. Model State: _Current person:_ Lauren; _Lock state:_ closed
1. Password entry. Trigger _Event Handling_
2. Change person to Lauren; App unlocked
3. Model State: _Current person:_ Lauren; _Lock state:_ open
4. Change state of View(s)
5. Trigger _Redraw_ and show
]
???
Sketch out key concepts
- Input -- we need to know when people are doing things. This needs to be event driven.
- Output -- we need to show people feedback. This cannot ‘take over’, i.e., it needs to be multi-threaded
- Back end -- we need to be able to talk to the application.
- State machine -- we need to keep track of state.
- What don’t we need? We don’t need to know about the rest of the UI, probably, etc etc
- Model View Controller -- this works within components (draw diagram), but also represents the overall structure (ideally) of a whole user interface
- NOTE: Be careful to write any new vocabulary words on the board and define as they come up.
---
# Model View Controller (MVC)
From [Wikipedia](https://en.wikipedia.org/wiki/Model-view-controller):
> MVC is a software design pattern commonly used for
> developing user interfaces which divides the related program logic into three interconnected elements.
- *Model* - a representation of the state of your application
- *View* - a visual representation presented to the user
- *Controller* - communicates between the model and the view
- Handles changing the model based on user input
- Retrieves information from the model to display on the view
--
MVC exists within each View as well as for the overall interface
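A minimal Java sketch of this separation, with the step numbers from the earlier slides in comments (all class and method names are illustrative, not from any particular toolkit):
```java
import java.util.ArrayList;
import java.util.List;

// Model: holds application state and notifies listeners when it changes
class LockModel {
    interface StateListener { void onStateChanged(boolean locked); }
    private final List<StateListener> mListeners = new ArrayList<>();
    private boolean mLocked = true;

    void addListener(StateListener listener) { mListeners.add(listener); }

    void setLocked(boolean locked) {                        // 2: update the model
        mLocked = locked;
        for (StateListener l : mListeners) {
            l.onStateChanged(mLocked);                      // 4: state-change event
        }
    }
}

// Controller: turns user input into model updates, and model changes into view updates
class LockController implements LockModel.StateListener {
    private final LockModel mModel;

    LockController(LockModel model) {
        mModel = model;
        model.addListener(this);
    }

    void onPasswordEntered(String password) {               // 1: input from the View
        if ("1234".equals(password)) mModel.setLocked(false);
    }

    @Override
    public void onStateChanged(boolean locked) {
        // 5: trigger a redraw so the View(s) reflect the new state
    }
}
```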
---
# MVC in Android
.left-column30[
```mermaid
graph TD
View(Passive View) --user events--> Presenter("Presenter: Supervising Controller")
Presenter --updates view--> View
Presenter --updates model--> Model(Model)
Model --state-change events--> Presenter
classDef edgeLabel font-size:14pt
classDef blue font-size:14pt,text-align:center
classDef bluegreen font-size:14pt,text-align:center
class View,Presenter blue
class Model bluegreen
```
]
.right-column60[
Applications typically follow this architecture
- What did we learn about how to do this?
- What causes the screen to update?
- How are things laid out on screen?
]
???
- Relationship of MVC to Android software stack
- Measure and layout
- Discuss Whorfian effects
--
.right-column60[
Responding to Users: Event Handling
- When a user interacts with our apps, Android creates **events**
- As app developers, we react by "listening" for events and responding appropriately
]
---
|Procedural | Event Driven |
| :--: | :--: |
|![:img Code printout saying Statement 1; Statement 2; Statement 3, 60%](img/events/procedural.png)|![:img Code printout saying Method 1; Method 2; Method 3 with mouse and keyboard icons causing events pointed at different methods, 60%](img/events/eventdriven.png)|
|Code is executed in sequential order | Code is executed based upon events|
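Under the hood, "event driven" still means a loop is running somewhere; a minimal, self-contained Java sketch of the dispatch loop a toolkit might run (all names here are illustrative):
```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventLoopSketch {
    interface Handler { void handle(String event); }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        Handler handler = event -> System.out.println("Handling: " + event);

        // The hardware/OS/window system side produces events...
        queue.put("click");
        queue.put("keypress");

        // ...and the toolkit's loop dispatches each one as it arrives
        while (!queue.isEmpty()) {
            handler.handle(queue.take());
        }
    }
}
```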
---
# But what is an Event?
Generally, input is harder than output
- More diversity, less uniformity
- More affected by human properties
---
# Where does an Event come from?
Consider the "location" of an event...
What is different about a joystick, a touch screen, and a mouse?
???
- Mouse was originally just a 1:1 mapping in 2 dimensions == absolute
location; bounded
- Joystick is relative (maps movement into rate of change in location); unbounded
- Touch screen is absolute; bounded
- What about today's mouse? Lifting and moving?
--
- Mouse was originally just a 1:1 mapping in 2 dimensions == absolute
location; bounded
- Joystick is relative (maps movement into rate of change in location); unbounded
- Touch screen is absolute; bounded
--
What about the modern mouse? Lifting and moving?
--
How about a Wii controller?
---
# Is this an input device?
.left-column-half[![:img Picture of swipe keyboard showing text entry of satisfying, 60%](img/events/swipe.png)]
--
.right-column-half[No … it’s an interaction technique. Over 50 WPM!]
???
Who/what/where/etc
Dimensionality – how many dimensions can a device sense?
Range – is a device bounded or unbounded?
Mapping – is a device absolute or relative?
--
.right-column-half[
Considerations:
- Dimensionality – how many dimensions can a device sense?
- Range – is a device bounded or unbounded?
- Mapping – is a device absolute or relative?
]
---
# Interaction techniques / Components make input devices effective
For example, consider text entry rates, in words per minute (WPM):
- 60-80 WPM (keyboards; Twiddler)
- ~20 WPM (soft keyboards)
- ~50 WPM? Swype – but is it an input device?
---
# Modern hardware and software are starting to muddy the waters around this
![:img Picture of OLED keyboard with labels on keys for gaming instead of typing, 30%](img/events/oled.png)
???
Add OLEDs to keys -> reconfigurable label displays
---
# Higher level abstraction
Logical Device Approach (see the sketch after this list):
- Valuator (slider) -> returns a scalar value
- Button -> returns an integer value
- Locator -> returns a position on a logical view surface
- Keyboard -> returns a character string
- Stroke -> returns a sequence of points
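A hedged sketch of what these logical devices could look like as Java interfaces (every name here is illustrative, not from a real toolkit):
```java
import java.util.List;

// Each logical device hides the physical hardware behind a uniform return type
interface Valuator { double getValue(); }        // slider: scalar value
interface LogicalButton { int getState(); }      // button: integer value
interface Locator { Point getPosition(); }       // position on a logical view surface
interface LogicalKeyboard { String getText(); }  // character string
interface Stroke { List<Point> getPoints(); }    // sequence of points

// A tiny point type so the sketch is self-contained
class Point {
    final float x, y;
    Point(float x, float y) { this.x = x; this.y = y; }
}
```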
???
- Can obscure important differences -- hence use inheritance
- Discussion of mouse vs pen -- what are some differences?
- Helps us deal with a diversity of devices
- Make sure everyone understands types of events
- Make sure everyone has a basic concept of how one registers listeners
---
# Not really satisfactory...
Doesn't capture full device diversity
| Event based devices | Sampled devices |
| -- | -- |
| Time of input determined by user | Time of input determined by program |
| Value changes only when activated | Value is continuously changing |
| e.g.: button | e.g.: mouse |
???
Capability differences
- Discussion of mouse vs pen
- what are some differences?
---
# Contents of Event Record
Think about your real-world event again. What do we need to know?
- **What**: Event Type
- **Where**: Event Target
- **When**: Timestamp
- **Value**: Event-specific variable
- **Context**: What was going on?
???
Discuss each with examples
---
# Contents of Event Record
What do we need to know about each UI event?
- **What**: Event Type (mouse moved, key down, etc.)
- **Where**: Event Target (the input component)
- **When**: Timestamp (when did event occur)
- **Value**: Mouse coordinates; which key; etc.
- **Context**: Modifiers (Ctrl, Shift, Alt, etc.); Number of clicks; etc.
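As one concrete case, Android's `MotionEvent` carries each of these fields. A sketch of reading them, assuming we are inside a custom `View` subclass (such as the ColorPicker mentioned earlier):
```java
@Override
public boolean onTouchEvent(MotionEvent event) {
    int what = event.getAction();         // What: ACTION_DOWN, ACTION_MOVE, ACTION_UP, ...
    long when = event.getEventTime();     // When: timestamp (ms since boot)
    float x = event.getX();               // Value: touch coordinates within this view
    float y = event.getY();
    int modifiers = event.getMetaState(); // Context: modifier-key state
    return true;                          // Where: the toolkit already routed it to this View
}
```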
???
Discuss each with examples
---
# Input Event Goals
Device Independence
- We want and need device independence
- We need a uniform, higher-level abstraction for input
Component Independence
- Given a model for representing input, how do we get inputs delivered to the right component?
---
# Summary
- Callbacks: a programmatic way to get information from, or send information to, the system
--
- MVC: Separation of concerns for user interaction
--
- Events: logical input device abstraction
--
- We model everything as events
- Sampled devices
  - Handled as “incremental change” events
  - Each measurable change: a new event with a new value
- Device differences
  - Handled implicitly: each device generates only the events it can produce
- Recognition Based Input?
  - Yes, we can generate events for this too
---
# End of Deck