Notably, the old sequencer calculated the results of all tracks for a given frame number, regardless of whether they were needed or not.
Even worse, it never freed them afterwards; you had to click on refresh now and then to get rid of the cache. The final renderer freed the cache automatically after each frame, thereby disabling the advantages of the cache completely. You will notice this misbehaviour if you use scene tracks for titles on the timeline: calling the renderer for each frame again and again can be somewhat slow...
We have to limit the cache usage in some way to a user-provided upper limit (32 MB by default, but this can be changed in the user preferences).
Mathematically speaking, there is an ordering relation on the cache elements that tells us which elements have to be kicked out first. This is implemented using a "deletion chain".
Since this ordering depends deeply on the data in the cache, we give control over it to the user of the cache. That also means that there is not one global cache, but one for each problem.
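The following is a minimal sketch of such a deletion chain, not Blender's actual code: it assumes for simplicity that every cached element has the same size and it leaves out reference counting. The class and member names are purely illustrative.

#include <cstddef>
#include <list>

/* Illustration of a deletion chain: the least recently touched elements
 * sit at the front and are destroyed first when the limit is exceeded. */
template <class T>
class DeletionChain {
public:
    typedef typename std::list<T *>::iterator Handle;

    DeletionChain(std::size_t limit_in_bytes, std::size_t bytes_per_element)
        : limit(limit_in_bytes), elem_size(bytes_per_element) {}

    /* New elements enter at the back (the "most recently used" end). */
    Handle insert(T *data) {
        chain.push_back(data);
        Handle h = chain.end();
        return --h;
    }

    /* Touching moves an element to the back, protecting it the longest. */
    void touch(Handle h) {
        chain.splice(chain.end(), chain, h);
    }

    /* Kick out elements from the front until we are under the limit. */
    void enforce_limits() {
        while (!chain.empty() && chain.size() * elem_size > limit) {
            delete chain.front();
            chain.pop_front();
        }
    }

private:
    std::list<T *> chain;
    std::size_t limit;
    std::size_t elem_size;
};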
This design gives us four basic operations on the cache: insert, unmanage, touch and enforce_limits.
Since an element must not be kicked out of the cache while we are still working with it, reference counting gives us three additional operations on a cache handle: ref, unref and get_refcount.
class BigFatImage {
public:
    ~BigFatImage() { tell_everyone_we_are_gone(this); }
};

MEM_Cache<BigFatImage> BigFatImages;

void doit(BigFatImage * b)
{
    MEM_Cache_Handle<BigFatImage>* h = BigFatImages.insert(b);

    h->ref();
    BigFatImages.enforce_limits();
    h->unref();

    h->touch();

    h->ref();

    /* work with image... */

    h->unref();

    /* leave image in cache. */
}
void delete_MEM_CacheLimiter(MEM_CacheLimiterC * This)
Delete the cache limiter, keeping its elements untouched.
MEM_CacheLimiterHandleC * MEM_CacheLimiter_insert(
MEM_CacheLimiterC * This, void * data)
Add an element to the cache management system and return a handle to
control its behaviour in the cache.
void MEM_CacheLimiter_enforce_limits(MEM_CacheLimiterC * This)
Enforce limits on the cache.
void MEM_CacheLimiter_unmanage(MEM_CacheLimiterHandleC * handle)
Unmanage an element without deleting it!
void MEM_CacheLimiter_touch(MEM_CacheLimiterHandleC * handle)
Touch an element.
void MEM_CacheLimiter_ref(MEM_CacheLimiterHandleC * handle)
Increase the reference count of an element, thereby preventing it from
being deleted by enforce_limits().
void MEM_CacheLimiter_unref(MEM_CacheLimiterHandleC * handle)
Decrease the reference count of an element.
int MEM_CacheLimiter_get_refcount(MEM_CacheLimiterHandleC * handle)
Get the current reference count for debugging purposes.
void * MEM_CacheLimiter_get(MEM_CacheLimiterHandleC * handle)
Get the raw data pointer from the cache limiter handle.
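Putting the C functions above together, a typical call sequence might look like the following sketch. The limiter is assumed to have been created elsewhere (a constructor is not part of the listing above); the header name and the helper work_on() are assumptions for illustration.

#include "MEM_CacheLimiterC-Api.h"  /* assumed header name */

static void work_on(void *data);    /* placeholder for the real work */

void use_cached_data(MEM_CacheLimiterC *limiter, void *data)
{
    /* Put the data under cache management and keep the handle. */
    MEM_CacheLimiterHandleC *handle = MEM_CacheLimiter_insert(limiter, data);

    /* Pin the element so enforce_limits() cannot delete it right now. */
    MEM_CacheLimiter_ref(handle);
    MEM_CacheLimiter_enforce_limits(limiter);

    /* Work on the raw data while it is pinned. */
    work_on(MEM_CacheLimiter_get(handle));

    /* Unpin it and mark it as recently used; it stays in the cache
     * until a later enforce_limits() decides to kick it out. */
    MEM_CacheLimiter_unref(handle);
    MEM_CacheLimiter_touch(handle);
}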
void IMB_cache_limiter_unmanage(struct ImBuf * i):
Release this ImBuf from the manager.
void IMB_cache_limiter_touch(struct ImBuf * i):
Touch the ImBuf.
void IMB_cache_limiter_ref(struct ImBuf * i):
Increase the reference count of an element, thereby preventing it from
being deleted by insert(). This has nothing to do with the function
IMB_refImBuf()!
void IMB_cache_limiter_unref(struct ImBuf * i):
Decrease the reference counter.
int IMB_cache_limiter_get_refcount(struct ImBuf * i):
Get the reference counter for debugging purposes.
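The usage pattern mirrors the C API. A minimal sketch, assuming the ImBuf is already under cache management (an insert function is not part of the listing above), that the imbuf header declares these functions, and that process() stands in for the real work:

#include "IMB_imbuf.h"       /* assumed to declare IMB_cache_limiter_* */
#include "IMB_imbuf_types.h" /* assumed to define struct ImBuf */

static void process(struct ImBuf *ibuf); /* placeholder for the real work */

void use_managed_imbuf(struct ImBuf *ibuf)
{
    /* Pin the buffer so the cache limiter cannot free it while we work.
     * (This is unrelated to IMB_refImBuf()!) */
    IMB_cache_limiter_ref(ibuf);

    process(ibuf);

    /* Unpin it and mark it as recently used; it remains in the cache. */
    IMB_cache_limiter_unref(ibuf);
    IMB_cache_limiter_touch(ibuf);
}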
Luckily, the designers of the STL thought of this possibility and added "allocators" to the STL.
If you use the STL within Blender, you are encouraged to use the allocator provided by memutil/MEM_Allocator.h.
You have to replace

std::list<int>

by

std::list<int, MEM_Allocator<int> >

You may want to add typedefs to save you from the pain of writing this all the time... ;-)
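For instance (the typedef name and the exact include form are just illustrative):

#include <list>
#include "MEM_Allocator.h"  /* from memutil, as mentioned above */

typedef std::list<int, MEM_Allocator<int> > IntList;

IntList my_numbers;  /* same interface as std::list<int>, but allocating
                        through MEM_Allocator */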