A distributed object cache is a system that lets one program store an object in memory and have it available to another program on another computer. Any node can store and update objects. Caches can yield huge performance improvements in certain workflows. One of the more popular implementations is Memcached, which I was hoping to use, but I was disappointed to find that its developers have not released an official .NET client library. I would rather use one made by the Memcached team than some third party.
Making an object cache in .NET is pretty straightforward when you think about it: it's just a key-value store whose items time out after some interval. Making it work over the network is potentially difficult in a decentralised design, but WCF 4 has the wonderful NetPeerTcpBinding, which does all the tricky work of discovering, joining and leaving a virtual mesh of nodes in the distributed cache via a peer-to-peer (P2P) mesh. NetPeerTcpBinding was actually introduced back in .NET 3.0, but WCF 4 is so much easier to work with on the whole. But I digress.
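Before any networking, the local half of the idea is just that expiring key-value store. Here is a minimal sketch of one; the class and member names (TtlCache, Touch, TryGet) are my illustration here, not the real project's types:

```csharp
using System;
using System.Collections.Concurrent;

// A minimal sketch of the core idea: a thread-safe key-value store whose
// entries expire after a fixed time-to-live.
public class TtlCache
{
    private readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _items =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();
    private readonly TimeSpan _ttl;

    public TtlCache(TimeSpan ttl)
    {
        _ttl = ttl;
    }

    public void Put(string key, object value)
    {
        // Store the value along with its absolute expiry time.
        _items[key] = Tuple.Create(value, DateTime.UtcNow.Add(_ttl));
    }

    // Resets the expiry clock; in the distributed version this is what
    // gets broadcast to the other nodes.
    public void Touch(string key)
    {
        Tuple<object, DateTime> entry;
        if (_items.TryGetValue(key, out entry))
            _items[key] = Tuple.Create(entry.Item1, DateTime.UtcNow.Add(_ttl));
    }

    public bool TryGet(string key, out object value)
    {
        Tuple<object, DateTime> entry;
        if (_items.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
        {
            value = entry.Item1;
            return true;
        }
        value = null;
        return false;
    }
}
```

Expired entries are simply treated as misses here; a background sweep to evict them is left out for brevity.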
Basically, NetPeerTcpBinding behaves very much like NetTcpBinding, so get your program working with NetTcpBinding first. To transition to NetPeerTcpBinding you merely change your binding; no other changes to program flow are required.
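To make the swap concrete, here is a rough sketch of self-hosting with the binding changed in code rather than config. CacheService and ICacheBroadcast stand in for the real service and contract types, and the addresses are illustrative:

```csharp
using System;
using System.ServiceModel;

class HostSketch
{
    static void Main()
    {
        // Plain TCP version, for getting things working first:
        // var binding = new NetTcpBinding(SecurityMode.None);
        // var address = new Uri("net.tcp://localhost:8000/SchmickyCache");

        // Switching to the P2P mesh is just a different binding and URI scheme:
        var binding = new NetPeerTcpBinding { Security = { Mode = SecurityMode.None } };
        var address = new Uri("net.p2p://broadcastmesh/SchmickyCache");

        // The rest of the hosting code is unchanged.
        var host = new ServiceHost(typeof(CacheService));
        host.AddServiceEndpoint(typeof(ICacheBroadcast), binding, address);
        host.Open();

        Console.WriteLine("Cache node running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```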
Here’s a snippet of my WCF cache host config file:
<bindings>
  <netNamedPipeBinding>
    <binding name="noSecurityPipeConfig">
      <security mode="None" />
    </binding>
  </netNamedPipeBinding>
  <netPeerTcpBinding>
    <binding name="noSecurityP2PBinding">
      <security mode="None" />
    </binding>
  </netPeerTcpBinding>
  <netTcpBinding>
    <binding name="tcpNoSecurityConfig">
      <security mode="None" />
    </binding>
  </netTcpBinding>
</bindings>
<services>
  <service behaviorConfiguration="myServiceBehaviour" name="Schmicky.Cache.Services.CacheService">
    <endpoint address="announcements" binding="netPeerTcpBinding"
              bindingConfiguration="noSecurityP2PBinding" name="p2pEndpoint"
              contract="Schmicky.Cache.Contracts.Interfaces.ICacheBroadcast"
              listenUriMode="Explicit" />
    <endpoint address="local" binding="netNamedPipeBinding"
              bindingConfiguration="noSecurityPipeConfig" name="pipe"
              contract="Schmicky.Cache.Contracts.Interfaces.ICacheQuery" />
    <host>
      <baseAddresses>
        <add baseAddress="net.p2p://broadcastmesh/SchmickyCache/" />
        <add baseAddress="net.pipe://SchmickyCache/" />
      </baseAddresses>
    </host>
  </service>
</services>
You will note that I have two endpoints. The reason is that the P2P binding only supports one-way, announcement-style operations, since each message is flooded out to every node in the mesh. That sort of threw a spanner in the works, as a cache wouldn't be much good if you could only store items and never retrieve them.
Then I thought: why not add a secondary request-reply endpoint just for queries? That works, so I made use of the high-speed channel from NetNamedPipeBinding. It doesn't matter that it's localhost-only, because both endpoints talk to the local cache access point host. Yippee!
So with two endpoints it makes sense to map them to two different interfaces – one for storage and the other for cache query and retrieval:
[ServiceContract(ProtectionLevel = ProtectionLevel.None)]
public interface ICacheBroadcast
{
    /// <summary>
    /// Stores the specified item; broadcast one-way to every node in the mesh.
    /// </summary>
    /// <param name="request">The request.</param>
    [OperationContract(IsOneWay = true)]
    void Put(PutItemRequest request);

    /// <summary>
    /// Resets the expiry timer for an item on every node.
    /// </summary>
    [OperationContract(IsOneWay = true)]
    void Touch(string key);
}

[ServiceContract(ProtectionLevel = ProtectionLevel.None, SessionMode = SessionMode.Allowed)]
public interface ICacheQuery
{
    /// <summary>
    /// Retrieves an item from the local node over the named pipe.
    /// </summary>
    [OperationContract(IsOneWay = false)]
    GetItemResponse Get(GetItemRequest request);
}
When an item is stored, it is broadcast to all the other nodes in the mesh very quickly. Retrieval is immediate for the app that stored the item in the first place, since there is no network latency; remote nodes won't know about it until the broadcast reaches them. When any node serves up an item, I send an asynchronous touch command to all the other nodes to keep the item alive. .NET 4's Task class is quite useful here.
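The read path might look something like this. This is a sketch only: _localCache, _broadcast and the Found property are assumed members standing in for the real implementation:

```csharp
using System.Threading.Tasks;

public partial class CacheService : ICacheQuery
{
    private readonly TtlCache _localCache;        // local in-memory store (assumed)
    private readonly ICacheBroadcast _broadcast;  // P2P channel into the mesh (assumed)

    public GetItemResponse Get(GetItemRequest request)
    {
        // Serve the read entirely from the local store: no network hop.
        GetItemResponse response = _localCache.Get(request);
        if (response.Found)
        {
            // Fire-and-forget touch so the caller never waits on the mesh.
            // (.NET 4 uses Task.Factory.StartNew; Task.Run arrived in 4.5.)
            Task.Factory.StartNew(() => _broadcast.Touch(request.Key));
        }
        return response;
    }
}
```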
The best thing about P2P in WCF is that there is no manual looping over nodes to send updates; it is all encapsulated by WCF. Think UDP multicast.
So now I have a working distributed cache and it only took me a day to do. Who said things are too complex?
Till next time.