Slide 1. We're going to do a little exercise instead of Hall of Fame or Shame today. I know I'm early, but if you were wondering what to do with the next two minutes, look at this: come up with some things that caused you to react — just something you experienced and then reacted to — and have them ready to discuss. If you're just walking in, or you were here when I said it: while we're waiting for people, we're going to start class with you thinking about something you experienced and then reacted to. A very mundane example, which you shouldn't use because I'm about to give it to you, is that you went outside, got rained on, and maybe went back inside and got an umbrella — that would be a reaction. So think about things you reacted to this week.

Does anyone have an experience they reacted to that they'd like to share? One student's cat jumped up on the counter, and they reacted to that. Any other experiences you reacted to? Thank you, Lauren. What are some things you react to every day, that have a setup like "this is going to happen and then I'm going to do that"? My alarm goes off and I get out of bed — that's a great example of an in-and-out kind of reaction, and a nice example of what I would call a feedback loop. Something you do every day — Jake, I love it — and that's not necessarily one you had to learn; I mean, you learned it at some point, but it's internalized by now. Smelling food and then getting hungry — nice example. One more — just tell me your name again — thank you. These are great examples.

So this is what I would call event-driven interaction. In each of these there's some kind of event that causes the reaction. Implicit in all of these examples — the rain, the alarm, the food — there's some kind of sensory input: you're interpreting data that's coming in. And then there's output: you're doing something that affects the world, or in some cases it's more of an internal change. So you're either changing your internal state or changing the world around you — two different ways you can create change. That concept of internal state, input, and output — along with events, which are how we encapsulate input in interface programming — is what the next set of three lectures leading up to the midterm is about: the process by which the interface figures out how to react to events, which you'll then have an assignment on. We're going to be talking about that whole process.

Slide 4. Okay, so today we're going to talk about the differences between hardware and interaction techniques, which has to do with what counts as an input event versus more of a feedback loop. We're going to talk about the abstractions we use to describe events, a little bit about sensor-based input, and how we use events as an abstraction. I also want to mention that there is a Canvas quiz up about our class experiment. Please fill it out so that we can complete the experiment, even if you didn't take part in it.
It's up to you — it's anonymous — but we do have some questions even for people who didn't wear name tags or forgot them every time; we still want to hear what you have to say, so please do that. Also, today's class is going to have a really different structure. I'm going to talk for the first 30 or 40 minutes about events and hopefully work through some things with you about events, and then we're using a College of Engineering service to get feedback on the class. It's always important to participate in this. It's probably less visible than it was last time, but we are doing a lot of work behind the scenes, on the fly, to fix things, as well as work we prepared in advance, to try to get this class settled into a form we can use and reuse, you know, twice a year for many years to come. Your feedback shapes our class, so please stick around when that starts. I'll turn off the recording and everything when it happens. What will happen is that the facilitator is going to talk with you about key things we wanted feedback on, as well as solicit from you what you might want to give us feedback on, and then that gets summarized into an anonymous report that will be discussed with Ken.

Slide 5. So we just talked about you responding to input, and I'm wondering if you have any thoughts about how an app might respond. (I'm realizing I really want a piece of paper — let me see if this works.) Okay, so talk to me or talk to your neighbor: what are some of the things that have to happen from the moment the user does something? What's a kind of input a user might give on a computer or a phone? Touch the screen, press a key, something like that — right, we know that happens. How do we get all the way from that to the end result? What's the last thing that happens in this chain, the part we don't really know about yet? Something shows up on the screen. What connects those two at the programming level? We know how output is drawn on the screen, but how does it actually happen that an input triggers whatever has to change in order for something to show up on the screen?

Somebody has to listen for the event, right — something has to be notified when it comes in. Or, to go back a level, the input comes in through the operating system, or really through some driver, and if it doesn't get delivered to something else, we can't react to it in the app. So something has to be notified. What has to be notified? What's actually drawing on the screen? You implemented something in Doodle that draws on the screen — in Android, what does that drawing go into? A Canvas. And what did you create a custom version of in Doodle? A View, right. So, down here:
We have some kind of view — or, since we're talking more generally, I'll call it an interactor — can you see that? — and it's drawing output on the screen. And up here we have some kind of input, which I'm going to call an input event. So we're working out this chain: here's our output being drawn, and somehow the interactor has to find out about the input in order to update what it's drawing.

So what kinds of questions does the toolkit need to answer for that to happen? Let's say you have an interface with many interactors on it. Android has to figure out where you touched. Then what does it have to do with that information about where you touched? Usually we deliver the event to the thing that's under where you touched. So somewhere in here we have to pick an interactor — I'm using the word "pick" because it's a technical term we'll get to later this week — the toolkit has to pick an interactor to give the input to. That clearly has to happen, and it's a service the toolkit provides: your interactor doesn't have to do a lot of work for that to happen properly; the toolkit figures out where the event should go.

Then what does the interactor have to do? Once your interactor is told somebody clicked, how does it translate that into information about what, specifically, to draw? It's okay if you don't know the answer to that question, because we haven't talked about it yet, but think about it. The interactor is getting something like "press at x, y," and it's doing something that can vary significantly. If you just pressed on a menu, it might display a bunch of new things — new menu items that weren't visible until you pressed. There's a really wide array of things an interactor can do in response to input. And not only that: let's say you press on a menu after it's already open — it might close. So the way any given interactor reacts varies. What's the difference between pressing when it's closed and pressing when it's open? What property of the interactor are we looking at to decide which thing to do? We call that its state. The interactor is in a certain state, and depending on what state it's in, it responds differently to the input. So the interactor has to — we're going to call it this — execute its state machine, which is like a very simple program that specifies how this interactor responds to events.

And then, because of abstraction and separation of concerns, what it's actually going to do is notify the toolkit that there's a change, and the toolkit will come back and say "draw yourself." You might think the interactor would just execute, change itself, and draw itself, but it has to notify the toolkit. At the same time, it's going to update any models. That might be a callback to the application saying "they selected Save," or "they changed what's in this drawing program, you should store that information" — forever, or for some short amount of time — or it might just update the interactor's own model, in which case we'd call it a feedback loop, like showing the menu and then hiding it; the application's data isn't going to change because of that. Either way there's usually some kind of update to some model — sort of like "am I hungry." Then that is fed back into the toolkit along with the fact that something visible has changed, and then onDraw gets invoked.
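To make that chain a little more concrete, here's a minimal sketch, in Kotlin, of an Android custom View in the spirit of Doodle. This is not the assignment code — the class name and the drawing are invented for illustration — but it shows the shape of the chain just described: the toolkit picks this view and delivers the touch event to onTouchEvent, the view runs its tiny state machine, invalidate() notifies the toolkit that something visible changed, and the toolkit later calls onDraw.

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.view.MotionEvent
import android.view.View

// Hypothetical custom View: toggles a dot's color each time it is pressed.
class ToggleDotView(context: Context) : View(context) {

    private var isOn = false          // the interactor's state
    private val paint = Paint()

    override fun onTouchEvent(event: MotionEvent): Boolean {
        if (event.action == MotionEvent.ACTION_DOWN) {
            isOn = !isOn              // state change in response to the input event
            invalidate()              // notify the toolkit: "I need to be redrawn"
            return true               // we consumed the event
        }
        return super.onTouchEvent(event)
    }

    override fun onDraw(canvas: Canvas) {
        // Called by the toolkit after invalidate(); draw output that reflects the state.
        paint.color = if (isOn) Color.RED else Color.BLUE
        canvas.drawCircle(width / 2f, height / 2f, 40f, paint)
    }
}
```

Notice that the view never redraws itself directly inside onTouchEvent; it updates its state, tells the toolkit via invalidate(), and waits for the toolkit to call onDraw.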
We're going to go over this a lot more over the next couple of lectures, but it gives you a sense of some of the things that have to happen behind the scenes for the simple act of a change in output in response to input. Any questions about that?

Slide 6. Let me go back to projecting. That's a high-level overview of what we're going to do. There's another way we can look at this that's also really important. What we just walked through is the toolkit's process, but the way we handle this in our abstractions is as follows. What I'm showing you here is that piece with the state change, the model change, and the view. I'm just giving you an example: suppose this is a digital phone app, and the user is pressing on the view — the interactor. That's how the user interacts, and they don't only interact with the interactor by pressing on it: it's not just an input acceptor, it's also what shows the output. That's what we just drew — it's drawing output. So the view is the entire way of interacting, whether you're thinking of the whole interface hierarchy or one specific interactor. Imagine I've drawn a human above the view. Then there's the piece we call a controller, which is just an abstraction for the code that figures out what to do with the input — that's going to be our state machine, in most cases. And then there's the current state of our app, which is our model state.

When the interaction starts, the access list has Jen and Adam in it — this is an app that lets somebody unlock something and use it. So at step zero, the model holds this current state. You can think of the model almost like a database: the current state of the world before the interaction starts, a persistent capture of the state of the world. The first thing that happens is password entry, which triggers event handling up here: the controller notifies the model of the change, and now we have a new model state, because the person who's interacting is now Jen — they've identified themselves. That then leads to a change in the state of the view, which triggers a redraw.

So this concept of model-view-controller is trying to capture the fact that there's this intermediary piece of code — usually implemented in the view, but not always — that's making decisions about what to do with input, how to update the state of the view, what to display, all of that. You can think of model-view-controller as describing a whole interface. If this is my app with my unlock screen — name and password, say — it has an interactor hierarchy with a bunch of different things in it; that whole thing uses model-view-controller as its basic architecture, and the underlying models for my application might have data about who I like to call, or whatever the app does.
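As a rough illustration of that whole-application model-view-controller, here's a minimal sketch in plain Kotlin of the unlock example. The class names (AccessModel, LockScreenView, LockScreenController) and the println "drawing" are invented for illustration; this isn't from Android or any particular toolkit.

```kotlin
// Model: a persistent capture of the state of the world (who may unlock, who is identified).
class AccessModel(private val accessList: Set<String>) {
    var currentUser: String? = null
        private set

    fun identify(name: String): Boolean {
        currentUser = if (name in accessList) name else null
        return currentUser != null
    }
}

// View: shows whatever the current model state calls for.
class LockScreenView {
    fun showLoginPrompt() = println("view: name/password prompt")
    fun showUnlocked(user: String) = println("view: unlocked for $user")
}

// Controller: decides what to do with input and tells the model and view to update.
class LockScreenController(private val model: AccessModel, private val view: LockScreenView) {
    fun onPasswordEntered(name: String) {      // the input event arrives here
        if (model.identify(name)) {            // 1. update the model state
            view.showUnlocked(name)            // 2. update the view to reflect the new state
        } else {
            view.showLoginPrompt()
        }
    }
}

fun main() {
    val controller = LockScreenController(AccessModel(setOf("Jen", "Adam")), LockScreenView())
    controller.onPasswordEntered("Jen")        // model state: the current user is now Jen
}
```

The flow to notice is the one from the slide: the event reaches the controller, the controller updates the model, and the view is then changed to reflect the new model state.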
But we can also talk about a model-view-controller just for something like this text entry box. There's a model-view-controller inside every single interactor, which just holds that interactor's current state — like whether there's text in the entry box or not. For this text entry box, the state might be that I've typed the first few characters of my username into it. Its model right now is the current text that's visible, and every time I type a new character it decides how to update itself. So that's the model; the view is just going to show those characters with a box around them; and the controller is handling keyboard input and updating that view. That's all within the single text entry box, and all of it has to function for it to show feedback.

But in addition, there's a larger model-view-controller for the whole application, which says: when I get an event telling me they've finished entering their username and password, I check my model for who's logged in, I update it to say that Jen is now logged in, and I change the interface so that instead of showing a name and password it shows, I don't know, my list of contacts, or whatever the app actually does. That's a model-view-controller as well — one operating at a different level than the one inside a single interactor — and the same abstraction is useful in thinking about both of those levels. Does that make sense? In the next assignment — the Color Picker, the one after Accessibility — you're going to build a custom interactor that has a model-view-controller and can handle input for setting a color; that's a single view. Then the assignment after that — actually two after that, which is Undo — you're going to build a whole drawing app. Each of those will have a model, and you can use the Color Picker inside the drawing program, because this architecture is what holds it all together. I know this is feeling very abstract; it will get more concrete over the course of the quarter, but I think it's really important to have this high-level introduction before we dive into specifics.

Slide 7. Oops. I also want to mention — staying with this concept for one more second — that it really doesn't matter how, specifically, I implement my interface. I have a digital door lock on my front door. If I walk up to it — let's say it's fancier than it actually is, maybe it has face recognition or speech recognition to identify me — it doesn't have to be typed input events for this model-view-controller architecture to still make sense. I can identify myself by saying "unlock door"; that changes who the model says I am; it changes the state of my view, which is the physical lock in the door — it causes the bolt to move back — and now my door is open. That exact same sequence, just with a different interface, is still using a model-view-controller. There's still an event and there's still output, just a different kind of event and a different kind of output. That's really important to understand. It's the kind of thing we might even ask about on an exam — "imagine this app: what would be the event, what would be the output in this scenario?" — so it's something to hold on to.

Slide 9. In Android the same kind of thing happens. We've talked about how Android does screen updates, and we've talked about how things are laid out on the screen, but it's this architecture underneath that we haven't talked about.

Slide 10. So that's what we're going to get into now. First of all, let me talk about what event handling is, because a lot of the programming many of you have done until now is probably what I would call procedural: it runs in a very ordered fashion, as a sequence of things.
Slide 11. Even some of the things we've done so far in class, like the animation, work that way: you set up some things, then you invoke something, and it executes, all in an order, right? But when you're doing event handling, code is executed based on events, and you have little snippets of code that are each associated with a specific situation and event: if there's a click in this view, run this snippet of code; if there's a keypress in that view, run this other snippet. That can be really hard to debug, because when you're trying to track down an error you can't just read through your code linearly, execute it in your head, and look for where something goes wrong — you have to understand the flow of information in order to debug something like that, and the code lives in many, many different places.

Slide 12. So now let's talk about what an event is. First of all, it's much harder to build abstractions for input than for output. For output we have a fairly straightforward set of abstractions for drawing things on the screen, and the way we do output in most of the interfaces we'll build is visual. When we're dealing with input, there's just a lot more diversity. Let me give you an example, because you might be sitting here thinking, "but Jen, everything is basically a mouse or a keyboard — how diverse is that?"
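Before we get to devices, here's a minimal sketch of what those event-handling snippets look like in code, assuming the standard Android listener APIs. The view names (saveButton, canvasView) are hypothetical; the point is only the style: nothing runs top to bottom, and each lambda runs only when its particular event arrives on its particular view.

```kotlin
import android.view.View
import android.widget.Button

fun wireUpListeners(saveButton: Button, canvasView: View) {
    saveButton.setOnClickListener {
        // runs only when the save button is clicked
        println("save clicked")
    }
    canvasView.setOnLongClickListener {
        // a different snippet for a different event on a different view
        println("canvas long-pressed")
        true   // returning true means "I consumed this event"
    }
}
```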
Slide 13. So let's look at a couple of examples of pointing input. Does anyone know what might be different between a joystick, a touchscreen, and a mouse when it comes to the kinds of events they generate? Talk to your neighbor and think about it: are they all just giving you an x, y location in two-dimensional space, or is there more to it than that? See if you can come up with at least one difference between each of these. Okay, can anyone give me something you think is true about a mouse but not about a touchscreen, or any other combination? Dragging from place to place — so the mouse has, let's call it, movement, whereas with a touchscreen everything is a press: a touchscreen just has press events.

Slide 14. Other ideas about differences? Specifically, the mouse pointer's location depends on how you move the mouse: based on where the pointer currently is, you're giving instructions for where it should go. So the mouse is relative, I would say — we'll get to the right vocabulary. And the joystick is mapping a change in what — pressure? Force, I think — it's mapping change in force, where the mouse is mapping change in position. That can have interesting impacts: with some joysticks, the harder you press, the faster the pointer moves, so maybe it's mapping change in force to speed, or even to speed and direction. Other differences? Multiple fingers — yes, I'm just trying to come up with a short way of writing that; that's great. Which of these are bounded — does that even make sense? Let me add a few things. A touchscreen has absolute positioning, so it's not relative to anything: where you press is what you get. It's also bounded by the size of the screen. A mouse you can move as far as you can move it, and depending on how fast you go you might travel further or less far to reach the same area, depending on the gain settings; and with a mouse we also pick it up and put it down repeatedly. How does that show up? Some of these things are going to appear in the events you get, and some of them aren't. The fact that you picked up the mouse shows up only implicitly, as a pause in your event stream; when you move it again you just keep getting events at the pointer's location, and the lift itself doesn't show up. But the difference between a touchscreen, which can't generate mouse-move events, and a mouse, which can, is going to show up in the actual events you get. Does that make sense? Okay, we'll take five more minutes and still give you enough time to do what you need to do — and no, we're not going to finish everything.

Slide 16. These are some of the things we just discussed about mice. I'm going to leave you to think about a Wii controller, and just point out that three-dimensional versus two-dimensional is an interesting difference. And these are all just devices generating location-style input — we haven't gotten into keyboards at all.

Slides 17-21. I also want to point out that an input device, especially on a phone, can also be an interaction technique. When I'm drawing on that soft keyboard, you can think of the soft keyboard as a kind of interactor which itself generates events: it's taking touch events as input and generating keystrokes as output. So I don't think you can define what input is produced without considering the context it's used in, because that is just a touchscreen, and yet it's also generating text events — which complicates things further. You can actually get to over 50 words per minute with interaction techniques like this, beyond the limit of regular soft keyboards — and even if we want to think of soft keyboards as less of an interaction technique and more plain input, they're still translating touches into keystrokes. That leap was due to interaction technique design, and we're going to talk more about interaction technique design later in the quarter.
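As a sketch of that idea — an interactor that consumes touch events and generates keystroke events — here's a toy, hypothetical SoftKeyboard class in plain Kotlin. It is not Android's real input method framework; the names and geometry are invented purely to show the translation from touch input to key output.

```kotlin
// One on-screen key: a character label plus the rectangle it occupies.
data class SoftKey(val label: Char, val left: Float, val top: Float,
                   val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float) = x in left..right && y in top..bottom
}

// The "interaction technique": touch events in, key events out (via a callback).
class SoftKeyboard(private val keys: List<SoftKey>,
                   private val onKey: (Char) -> Unit) {
    fun onTouch(x: Float, y: Float) {
        // If a key was hit, emit a keystroke event instead of passing along the raw touch.
        keys.firstOrNull { it.contains(x, y) }?.let { onKey(it.label) }
    }
}

fun main() {
    val keyboard = SoftKeyboard(
        listOf(SoftKey('a', 0f, 0f, 50f, 50f), SoftKey('b', 50f, 0f, 100f, 50f))
    ) { ch -> println("key event: $ch") }

    keyboard.onTouch(60f, 25f)   // a touch event goes in; a 'b' keystroke event comes out
}
```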
So I'm going to jump ahead to this logical device approach, because I think it's really important, and then we're going to finish for the day. The higher level of abstraction I want you to think about for an input event is what kind of values it returns: is it a scalar, like an integer? Is it a character string? Is it a sequence of points? That's the logical device approach. We can take all of these devices, with all their differences, and still generate essentially an x, y location for almost all of them — or all of them — but that still doesn't capture everything we care about.

Slide 24. For example: is this an event device or a sampled device? A mouse — we generate events from it, but it's really continuously producing a location, and that's not captured. In user interface toolkits we always sample: if we have continuous, sampled devices, we turn them into events. What happens is that the driver looks at the mouse, checks its position, and generates mouse events, so you can't tell that it's continuously producing data when you're using it in the interface — you can think of everything as being event-based. And then we have event records; that's the last piece. We have this logical model, and we have to capture it in a piece of code.

Slide 25. So we've turned everything into events, but we still need to deliver an object that has the key information describing the events being generated, and that's what the event record does. What kinds of things do you need to know? You need to know the type of the event; if there's any information about its target, you need to know that; I would argue you really need the timestamp, when it happened; there may be event-specific values that are important; and you need the context, what else is going on — like whether the Control key was pressed when I pressed my mouse button, because if it was, I might show something different than if it wasn't. Here's an example of what that might translate into: the event type might be that the mouse moved, or the mouse was pressed, or a key went down; we might want to know what input component the event targeted — and we'll talk later about how that information gets filled in, because it's not coming from the driver, it's coming from the toolkit; we need to know when it occurred — that's easy, it's the timestamp; there are values, such as the x, y location of the pointer; and then context, like the Control, Shift, or Alt keys. Okay, I think I'm going to stop there — yes, we all get to leave.
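As a postscript, here's a minimal sketch of what such an event record might look like as a data structure. This is a generic illustration with invented names — not Android's actual MotionEvent or KeyEvent classes — covering the fields just listed: type, target, timestamp, event-specific values, and context.

```kotlin
enum class EventType { MOUSE_MOVED, MOUSE_PRESSED, KEY_DOWN }

data class EventRecord(
    val type: EventType,         // what kind of event occurred
    val target: String?,         // which input component it is aimed at (filled in by the toolkit, not the driver)
    val timestamp: Long,         // when it occurred
    val x: Float, val y: Float,  // event-specific values, e.g. the pointer location
    val modifiers: Set<String>   // context, e.g. whether Control/Shift/Alt were held
)

fun main() {
    val e = EventRecord(EventType.MOUSE_PRESSED, "saveButton",
                        System.currentTimeMillis(), 120f, 48f, setOf("CTRL"))
    println(e)
}
```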