Transformations preserving linearity
If a linear object is defined by coherent addition and scaling, what should a map between two such objects be required to respect?
What does it mean for a transformation to count as genuinely linear?
Once vector spaces have been identified, the central question shifts from objects to maps.
The structurally meaningful functions between vector spaces are the ones that preserve the pattern that makes the setting linear in the first place.
If addition is part of the structure, then a structure-preserving transformation must respect addition:
f(u + v) = f(u) + f(v)
If scaling is part of the structure, then the same transformation must also respect scaling:
f(a·v) = a·f(v)
Those two requirements define a linear map.
They say that the map fits the ways linear objects are allowed to combine. Add first or map first: the result agrees. Scale first or map first: the result agrees again.
V ──f──▶ W
│        │
+        +
▼        ▼
V ──f──▶ W

(commutation = linearity)
The diagram is schematic. Its point is that the map fits the additive structure and commutes with it. The same idea applies to scalar multiplication as well: a linear map commutes with the permitted operations.
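The two axioms can be spot-checked numerically. The sketch below (illustrative helper names, not a proof: it only tests particular sample vectors) compares "add first, then map" against "map first, then add", and likewise for scaling.

```python
# Spot-check the two linearity axioms for a map f on sample vectors.
# Vectors are tuples; addition and scaling are done componentwise.

def respects_addition(f, u, v):
    # f(u + v) == f(u) + f(v)?
    lhs = f(tuple(a + b for a, b in zip(u, v)))
    rhs = tuple(a + b for a, b in zip(f(u), f(v)))
    return lhs == rhs

def respects_scaling(f, a, v):
    # f(a·v) == a·f(v)?
    lhs = f(tuple(a * x for x in v))
    rhs = tuple(a * x for x in f(v))
    return lhs == rhs

double = lambda p: (2 * p[0], 2 * p[1])

print(respects_addition(double, (1, 2), (3, 4)))  # True
print(respects_scaling(double, 3, (1, 2)))        # True
```

Passing such checks on a few vectors does not prove linearity, but a single failure disproves it.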
A few examples make the restriction clearer. The map
f(x, y) = (2x, 2y)
is linear. It doubles every contribution while preserving separate contributions. If two inputs are added first, the result is the same as doubling each separately and then adding.
Likewise,
f(x, y) = (x + y, y)
is still linear. It mixes the coordinates, but only by adding and scaling. No products such as xy or powers such as x^2 appear.
By contrast,
f(x, y) = (x^2, y)
falls outside linearity. The square introduces an interaction of the input with itself, and doubling the input quadruples the first component instead of doubling it, so scaling is not respected.
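The three examples can be put side by side. A minimal sketch (the helper name is illustrative) tests the addition axiom on one pair of inputs; the two linear maps pass, and the squaring map fails:

```python
# Test f(u + v) == f(u) + f(v) for one pair of sample inputs.
def additive_on(f, u, v):
    lhs = f((u[0] + v[0], u[1] + v[1]))
    rhs = tuple(a + b for a, b in zip(f(u), f(v)))
    return lhs == rhs

double = lambda p: (2 * p[0], 2 * p[1])    # linear: scales each coordinate
shear  = lambda p: (p[0] + p[1], p[1])     # linear: only adds and scales
square = lambda p: (p[0] ** 2, p[1])       # not linear: contains x^2

u, v = (1, 2), (3, 4)
print(additive_on(double, u, v))  # True
print(additive_on(shear, u, v))   # True
print(additive_on(square, u, v))  # False: (1 + 3)^2 = 16, but 1^2 + 3^2 = 10
```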
This is the structural role of linear maps: they preserve the disciplined world created by forbidding interaction terms. They are the allowable changes of viewpoint inside linear structure.
And once the right transformations have been identified, something important becomes possible. They compose.
If f: V -> W and g: W -> X are both linear, then doing f and then g is still linear. The reason is straightforward but crucial. If each map respects addition and scaling, then their composite respects addition and scaling too. The linear world is closed under both its internal operations and its structure-preserving transformations.
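Closure under composition can also be checked concretely. Using the two linear maps from the examples above, the composite "first double, then shear" still respects addition on a sample pair of inputs:

```python
# Composite of two linear maps, checked for additivity on sample inputs.
f = lambda p: (2 * p[0], 2 * p[1])     # doubling map, linear
g = lambda p: (p[0] + p[1], p[1])      # shear map, linear
gf = lambda p: g(f(p))                 # do f, then g

u, v = (1, 2), (3, 4)
lhs = gf((u[0] + v[0], u[1] + v[1]))                 # add first, then map
rhs = tuple(a + b for a, b in zip(gf(u), gf(v)))     # map first, then add
print(lhs == rhs)  # True
```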
That gives linear algebra one of its central organizational advantages: it is not merely a collection of vector spaces but a world whose objects and morphisms fit together coherently.
There is also an identity transformation: the map that leaves every element unchanged. It is linear because it plainly preserves both addition and scaling. So linear maps come with identity and composition, exactly the ingredients needed for a stable category of linear objects.
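The identity laws can be verified in the same sketch style: composing any map with the identity, on either side, changes nothing. The map `f` here is just the doubling example from earlier.

```python
# The identity map leaves every element unchanged, and composing
# with it on either side gives back the original map.
identity = lambda p: p
f = lambda p: (2 * p[0], 2 * p[1])

v = (3, 4)
print(identity(f(v)) == f(v))  # True: id ∘ f = f
print(f(identity(v)) == f(v))  # True: f ∘ id = f
```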
With the right transformations in place, linear algebra becomes a setting in which behavior can be transported, compared, and composed while staying within the structure.
Features of a vector space lie at different structural levels. Some properties change under a particular linear map, while others survive every isomorphism and therefore belong to the structure more deeply.
If linear maps are the correct transformations of vector spaces, what information remains invariant under them?
And what structural quantity measures how many independent directions a linear world really has?