Hi there,

here are a couple of thoughts of mine about Redis.

 

When programming against Azure Redis Cache, a lot of timeout exceptions occur.
Although this is frustrating, any distributed system talking across a network is exposed to this kind of issue.
It seems to be a frequent and widely reported problem with Azure Redis Cache, regardless of cache size and geographic location:
a quick web search will easily turn up lots of forum posts about it.

the simplest solution is, architecturally speaking, a small suicide for a cache: retry logic!
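As a minimal sketch of what I mean by retry logic (the helper name and the retry parameters are my own assumptions; a real implementation should catch RedisTimeoutException from StackExchange.Redis rather than a generic TimeoutException):

```csharp
using System;
using System.Threading;

static class RedisRetry
{
    // Hypothetical helper: retry an operation a few times before giving up.
    // In real code the catch should target RedisTimeoutException only.
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3, int delayMs = 200)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (TimeoutException) when (attempt < maxAttempts)
            {
                Thread.Sleep(delayMs); // small pause before trying again
            }
        }
    }
}
```

Usage would then look like: var value = RedisRetry.Execute(() => proxy.StringGet(key));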

 

Optimizing cache retrieval with item lifecycle

the first thing to do when reading from Redis, to avoid heavy network usage, is to check whether the item still exists

By invoking the KeyTimeToLive method we can find out how much life an item still has. Although this won't help with items that change frequently from different sources, it can be a good and easy solution for slowly changing (or never changing) items.

here is an example:

 

var connection = ConnectionMultiplexer.Connect(myConnectionString);
var proxy = connection.GetDatabase();

var life = proxy.KeyTimeToLive(key, CommandFlags.HighPriority); // HighPriority gives slightly better latency
if (!life.HasValue || life.Value.TotalMilliseconds < 100)
{
    // the item is missing or about to expire:
    // a new value must be created and set into the cache
}
else
{
    // the value is still in the cache, so we can use any local cache
    // or local variable without reading it again from Redis
}

 

Note that although this solution will increase your application's throughput and reduce network usage, latency will not improve much, because every item lifecycle check still costs a round-trip to the cache!
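To make the "use any local cache" idea above concrete, here is one way to pair the TTL check with a local copy. Everything here is an illustrative assumption of mine (the class name, the dictionary as local store, the 100 ms threshold, and the create/fetch delegates standing in for rebuilding the value and reading it from Redis):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical local shadow of cached values: rebuild only when the Redis
// item is missing or close to expiring, otherwise reuse the local copy.
class LocalShadowCache
{
    private readonly Dictionary<string, string> _local = new Dictionary<string, string>();

    // ttl is what KeyTimeToLive returned; create rebuilds the value from the
    // source of truth, fetch reads the still-valid value from Redis.
    public string Get(string key, TimeSpan? ttl, Func<string> create, Func<string> fetch)
    {
        if (!ttl.HasValue || ttl.Value.TotalMilliseconds < 100)
        {
            var fresh = create();   // rebuild and (in real code) SET it into Redis
            _local[key] = fresh;
            return fresh;
        }
        if (_local.TryGetValue(key, out var cached))
            return cached;          // still alive in Redis: reuse local copy, no network read
        var value = fetch();        // alive in Redis but not held locally yet: one read
        _local[key] = value;
        return value;
    }
}
```

The point of the design is that only the cheap TTL check hits the network on the hot path; the payload itself travels at most once per local lifetime.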
