
Video transcript

- [Voiceover] So in the last video, I introduced transformations and how you can think about functions as moving points in one space to points in another, and here I want to show an example of what that looks like when the input space is two-dimensional. So this over here is the input space. It's just a copy of the XY plane, and the output space is also two-dimensional, so the output space, in this case, is also a two-dimensional plane. And what I'm gonna do, I'm just gonna first play an example of one of these transformations and then go through the details of the underlying function and how you can understand the transformation as a result. So here's what it looks like. Here's what we're gonna be going towards. Very complicated, a lot of points moving. Lots of different things happening here, and what's common with this sort of thing, when you're thinking about moving from two dimensions to two dimensions, given that it's really the same space, the XY plane, you often just think about the input and output space all at once, and instead just watch a copy of that plane move onto itself. And by the way, when I say watch, I don't mean that you'll always have an animation like this just sort of sitting in front of you. When I think about transformations, it's usually a very vague thought in the back of my mind somewhere, but it helps to understand what's really going on with the function. I'll talk about that more at the end, but first let's just go into what this function is. So, the one that I told the computer to animate here is f of x,y is the input, is equal to x squared plus y squared, as the x component of the output, and x squared minus y squared is the y component of the output. So just to help start understanding this, let's take a relatively simple point like the origin. So here, the origin, which is zero, zero, and let's think about what happens to that. f of zero, zero. Well, x and y are both zero, so that top is zero.
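To make the computation concrete, here is a minimal sketch in Python (the video itself uses no code; the function name `f` just mirrors the video's notation) of the transformation f(x, y) = (x² + y², x² − y²), checking that the origin maps to itself:

```python
def f(x, y):
    """The transformation from the video:
    maps (x, y) to (x^2 + y^2, x^2 - y^2)."""
    return (x**2 + y**2, x**2 - y**2)

# Both components are zero at the origin, so (0, 0) maps to itself:
# it is a fixed point of the transformation.
print(f(0, 0))  # (0, 0)
```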
And same with the bottom, that bottom also equals zero. Which means it's taking the point zero, zero to itself, and if you watch the transformation, what this means is that the point zero, zero stays fixed, it's like you can hold your thumb down on it, and nothing really happens to it. And in fact, we call this a fixed point of the function as a whole, and that kind of terminology doesn't really make sense unless you're thinking of the function as a transformation. So let's look at another example here. Let's take a point like one, one. f of one, one. So in the input space, let's just kind of start this thing over so we're only looking at the input. In the input space, one, one is sitting right here, and we're wondering where that's gonna move. So when we plug it in, x squared plus y squared is gonna be one squared plus one squared, and on the bottom, x squared minus y squared, one squared minus y squared. Woop, (laughs) minus one squared. I'm plugging things in here. So that's two, zero. Two, zero. Which means we expect this point to move over to two, zero in some way. So if we watch the transformation, we expect to watch that point move over to here, and again, it can be hard to follow because there's a lot of moving parts, but if you're careful as you watch it, the point will actually land right there. And you can, in principle, do this for any given point and understand how it moves from one to another, but you might ask, hey Grant, what is the point of all of this? We have other ways of visualizing functions that are more precise, and kinda less confusing, to be honest. Vector fields are a great way for functions like this, graphs were a great way for functions with one input and one output. Why think in terms of transformations? And the main reason is conceptual. It's not like you'll have an animation sitting in front of you, and it's not like you're gonna by hand evaluate a bunch of points and think of how they move.
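The same hand computation can be run for the point one, one and a few neighboring grid points. This is a hedged sketch, not anything from the video; the sample points beyond (1, 1) are chosen here just for illustration:

```python
def f(x, y):
    # x-component: x^2 + y^2; y-component: x^2 - y^2
    return (x**2 + y**2, x**2 - y**2)

# The point (1, 1) lands on (2, 0), matching the worked example:
# 1^2 + 1^2 = 2 and 1^2 - 1^2 = 0.
print(f(1, 1))  # (2, 0)

# Tracking a handful of input points shows how a patch of the plane moves.
for point in [(0, 0), (1, 0), (0, 1), (1, -1)]:
    print(point, "->", f(*point))
```

Note that (1, −1) also lands on (2, 0): distinct input points can collide in the output, which is part of why the animated transformation looks like space folding over on itself.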
But there's a lot of different concepts in math, and with functions, where when you understand it in terms of a transformation, it gives you a more nuanced understanding. Things like derivatives, or the variations of the derivative that you're gonna learn with multi-variable calculus, there's different ways of understanding it in terms of stretching or squishing space and things like this, that doesn't really have a good analog in terms of graphs or vector fields. So it adds a new color to your understanding. Also, transformations are a super important part of linear algebra. There will come a point when you start learning the connection between linear algebra and multi-variable calculus. And if you have a strong conception of transformations, both in the context of linear algebra and in the context of multi-variable calculus, you'll be in a much better position to understand the connection between those two fields.