Caching
(1/5) Normalization
One of the big advantages of deeply immutable objects is that two structurally identical objects are referentially transparent; that is, you cannot distinguish whether they are represented in memory by one object or by two.
This means that it is possible to reuse the same objects to save memory. While in other languages the programmer would have to implement some specific support to reuse objects, in L42 this is supported directly by the language, through a process called normalization.
An immutable object can be turned into its normalized version by calling its .norm() method.
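A minimal sketch of normalization in action; the Person class and the exact spelling of .norm() are assumptions based on the surrounding text, not verified 42 code:

```
Person = Data:{S name}
Main = (
  bob1 = Person(name=S"Bob")
  bob2 = Person(name=S"Bob")
  //bob1 and bob2 are structurally identical, but may be
  //two distinct objects in memory
  norm1 = bob1.norm()
  norm2 = bob2.norm()
  //norm1 and norm2 are the same object, so the memory for
  //one of the two copies can be reclaimed
  Debug(norm1 == norm2)
  )
```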
Consider the following richer example: normalizing an object normalizes its whole ROG (reachable object graph). In the example, normalizing the two dogs also normalizes their owners, to the same normalized object. All of the dogs' fields are then replaced with the normalized versions, so the two dogs now share an owner object. Note that bob1 and bob2 are still two different objects.
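The richer example could look like the following sketch; the Dog and Person classes and the .norm() method are assumed names:

```
Person = Data:{S name}
Dog = Data:{S name, Person owner}
Main = (
  bob1 = Person(name=S"Bob")
  bob2 = Person(name=S"Bob")
  dog1 = Dog(name=S"Rex", owner=bob1).norm()
  dog2 = Dog(name=S"Fido", owner=bob2).norm()
  //normalizing each dog normalizes its whole ROG, including the
  //owner: dog1.owner() and dog2.owner() are now the same object,
  //while bob1 and bob2 are still two different objects
  Debug(dog1.owner() == dog2.owner())
  )
```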
The logic needed for normalization is the same needed to check whether two arbitrary objects are structurally equal, to print an object as a readable string, and to clone objects. Thus Data provides all of those operations by indirectly relying on normalization. They are all operations that require scanning the whole ROG of the object, so the cost of normalization is acceptable in context.
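A sketch of the operations Data derives this way; the == operator and the toS() method name are assumptions:

```
Person = Data:{S name}
Main = (
  bob1 = Person(name=S"Bob")
  bob2 = Person(name=S"Bob")
  //structural equality: true even if bob1 and bob2 are
  //two distinct objects in memory
  Debug(bob1 == bob2)
  //readable string form, also obtained by scanning the ROG
  Debug(bob1.toS())
  )
```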
(2/5) Lazy Caching
Some methods may take a long time to compute, but they are deterministic, and thus we could cache the result and reuse it many times. A typical example is Fibonacci:
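A naive version could be sketched as follows; the Data shape, Num literals and the expression-style if/else are assumptions about the exact 42 syntax:

```
Fibo = Data:{
  method Num of(Num n) = 
    if n <= 1Num n
    else this.of(n - 1Num) + this.of(n - 2Num)
  }
Main = Debug(Fibo().of(30Num))
```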
This Fibonacci implementation would take a very long time to run, since it recomputes the same results an exponential number of times. The tweaked implementation relying on caching is much faster: instead of a method with parameters, we declare a class with fields and an empty-named method doing the actual computation. As you can see, the caching is completely handled by the language and is not tied to the specific algorithm. This pattern is general enough to support any method from immutable data to an immutable result.
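The cached variant could be sketched like this; the class shape follows the description above, but the annotation placement and method syntax are assumptions (a named method compute() is used here where the text mentions an empty-named method):

```
CachedFibo = Data:{Num n
  //a class with a field and a no-arg method doing the computation;
  //@Cache.Lazy stores the result the first time compute() is called,
  //and, thanks to normalization, any structurally equal CachedFibo
  //object retrieves the same cached result
  @Cache.Lazy method Num compute() = 
    if this.n() <= 1Num this.n()
    else CachedFibo(n=this.n() - 1Num).compute()
       + CachedFibo(n=this.n() - 2Num).compute()
  }
Main = Debug(CachedFibo(n=30Num).compute())
```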
(3/5) Automatic parallelism
When decorated by Cache.Eager, a method's result is computed in a separate parallel worker, which starts as soon as the object is created.
An important consideration here is that both Cache.Lazy and Cache.Eager apply only to deterministic computations over deeply immutable data, so caching cannot change the result of the program, only the time and memory needed to compute it.
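A sketch of eager, parallel caching; the SlowOp class, its trivial body (standing in for a long, deterministic computation) and the annotation placement are assumptions:

```
SlowOp = Data:{Num seed
  //the worker computing result() starts when a SlowOp object
  //is created, not when result() is first called
  @Cache.Eager method Num result() = this.seed() * this.seed()
  }
Main = (
  op1 = SlowOp(seed=41Num)  //first worker starts here
  op2 = SlowOp(seed=42Num)  //second worker starts here
  //the two computations proceed in parallel; these calls just
  //wait for, or immediately retrieve, the cached results
  Debug(op1.result())
  Debug(op2.result())
  )
```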
(4/5) Invariants and derived fields
We have seen that cached behaviour can be computed lazily or eagerly on immutable objects. But we can bring caching even earlier and compute some behaviour at the same time as object instantiation. This allows us to encode derived fields: fields whose values are completely determined by other values in the same object. Consider the following example:
where the class computes the derived field exactly once, during object instantiation, through a class method whose parameters mirror the fields the derived value depends on.
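A sketch of a derived field; the Point class, the field names and the way the cached value is later read back are assumptions consistent with the description above:

```
Point = Data:{Double x, Double y
  //normSquared is computed once, during object construction;
  //its parameters mirror the fields it is derived from
  @Cache.Now class method Double normSquared(Double x, Double y) = 
    (x * x) + (y * y)
  }
Main = (
  p = Point(x=3Double, y=4Double)
  Debug(p.normSquared())  //behaves like a precomputed field
  )
```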
We can build on this behaviour to encode class invariants:
```
Point = Data:{Double x, Double y
  @Cache.Now class method Void invariant(Double x, Double y) = 
    if !(x >= 0Double && y >= 0Double) error X"""%
      | Invalid state:
      | x = %x
      | y = %y
      """
  }
```

Now, every time user code receives a Point, it can rely on both coordinates being non-negative: an object violating the invariant can never be observed, since the error is raised before construction completes.
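Hypothetical usage, assuming a Point class whose invariant requires non-negative coordinates:

```
Main = (
  ok = Point(x=1Double, y=2Double)  //invariant holds; object returned
  //the next line does not return a broken Point: the error is
  //leaked during object construction instead
  bad = Point(x=0Double - 1Double, y=2Double)
  Debug(S"unreachable")
  )
```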
(5/5) Summary
In 42, immutable objects can be normalized in order to save memory. This also works on circular object graphs: in case you are interested in the details, it relies on a variation of DFA (deterministic finite automata) normalization.
As a special case, objects without fields (immutable or not) are always represented in memory as a single object.
Results of cached computations are attached to normalized objects, so structurally equivalent objects can share their caches.
There are three kinds of caching, depending on the time the caching behaviour activates:
- Cache.Lazy computes the cached value when the annotated method is first called. It works on imm and class no-arg methods. An obvious workaround for the no-arg limitation is to define computation objects; this also works well with normalization: computation objects will retrieve the cached result of any structurally equivalent object.
- Cache.Eager computes the cached value in a separate parallel worker, starting when the object is created. It only works on imm no-arg methods of classes whose objects are all deeply immutable. Those classes automatically normalize their instances upon creation.
- Cache.Now computes the cached value during object construction. Since the object does not exist yet, the annotation can only be placed on a class method whose parameters represent the needed object fields. This annotation does influence the observable behaviour: if there is no error while computing the Cache.Now methods, the fully initialized object is returned; but if an error is raised while computing the cache, instead of returning the broken object, the error is leaked during object construction.
This, in turn, allows us to encode class invariants and to provide a static guarantee that users of a class can rely upon.