Numbers Everyone Should Know

A quick comparison of different data-access latencies in a distributed system:

  • L1 cache reference – 0.5 ns
  • Branch mispredict – 5 ns
  • L2 cache reference – 7 ns
  • Mutex lock/unlock – 100 ns
  • Main memory reference – 100 ns
  • Compress 1K bytes with Zippy – 10,000 ns
  • Send 2K bytes over 1 Gbps network – 20,000 ns
  • Read 1 MB sequentially from memory – 250,000 ns
  • Round trip within same datacenter – 500,000 ns
  • Disk seek – 10,000,000 ns
  • Read 1 MB sequentially from network – 10,000,000 ns
  • Read 1 MB sequentially from disk – 30,000,000 ns
  • Send packet CA->Netherlands->CA – 150,000,000 ns
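To put these numbers in perspective, some back-of-the-envelope arithmetic (a sketch; the constants are just the values from the list above):

```csharp
// Rough ratios using the latencies above, all in nanoseconds.
const double l1Cache     = 0.5;
const double mainMem     = 100;
const double diskSeek    = 10_000_000;
const double dcRoundTrip = 500_000;

// A main-memory reference costs as much as ~200 L1 cache hits.
System.Console.WriteLine(mainMem / l1Cache);      // 200

// One disk seek could pay for ~100,000 main-memory references.
System.Console.WriteLine(diskSeek / mainMem);     // 100000

// A round trip within the datacenter is ~5,000 memory references.
System.Console.WriteLine(dcRoundTrip / mainMem);  // 5000
```

Ratios like these explain why caching in memory beats going to disk or over the network by orders of magnitude.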

Seen in this talk by Jeff Dean (Google).

Obvious (or not!) and curious fact about raising events in .NET

Whenever I put an event on a .NET class, I follow the common pattern of having a dedicated method to raise it, making a copy of the event's handler chain first:

private void OnXXXXX(XXXXXEventArgs args)
{
    var tempHandler = this.XXXXX;
    if (tempHandler != null) tempHandler(this, args);
}

This is to avoid the race condition that would be created if the XXXXX event member were accessed twice. In that case, between the null check and the invocation of the delegate chain, some other thread could remove the last handler from the chain, leaving it null and causing a NullReferenceException.
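For contrast, the race-prone version the paragraph describes would look something like this (a sketch, reading the field twice):

```csharp
// Race-prone: the XXXXX field is read twice.
private void OnXXXXX(XXXXXEventArgs args)
{
    if (this.XXXXX != null)     // read #1: the chain is non-null here...
        this.XXXXX(this, args); // read #2: ...but another thread may have
                                // unsubscribed the last handler by now,
                                // so this can throw NullReferenceException.
}
```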

Using the tempHandler variable above should avoid the issue, since it is a local variable and delegates are immutable. The worst that could happen is invoking a handler that was meanwhile removed from the chain. However, as Jeffrey Richter points out in the section “Raising an Event in a Thread-Safe Way” of CLR via C# (4th edition):

what a lot of developers don’t realize is that this code could be optimized by the compiler to remove the local variable entirely

Makes sense! But the pattern is so common that, as he says, most developers don’t even think about this issue. The correct approach is to use a volatile read:

private void OnXXXXX(XXXXXEventArgs args)
{
    var tempHandler = Volatile.Read(ref this.XXXXX);
    if (tempHandler != null) tempHandler(this, args);
}

This way we ensure that the field is read only once, and it probably also addresses some visibility issues. But wait… how is it possible that the first approach isn’t causing problems for tons of developers?! This is the curious part. Richter says you can actually use the first approach because:

(…) the just-in-time (JIT) compiler is aware of this pattern and it knows not to optimize away the local temporary variable. Specifically, all of Microsoft’s JIT compilers respect the invariant of not introducing new reads to heap memory and therefore, caching a reference in a local variable ensures that the heap reference is accessed only once. This is not documented and, in theory, it could change, which is why you should use the last version. But in reality, Microsoft’s JIT compiler would never embrace a change that would break this pattern because too many applications would break.

In addition, events are mostly used in single-threaded scenarios (Windows Presentation Foundation and Windows Store apps), so thread safety is not an issue anyway.
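As a side note (not from the book, which predates it): since C# 6 the null-conditional operator gives the same read-once-then-invoke behavior in a single line, since the compiler reads the field into a temporary before the null check:

```csharp
private void OnXXXXX(XXXXXEventArgs args)
{
    // Equivalent to the tempHandler pattern: one read of the field,
    // then invoke only if that snapshot was non-null.
    this.XXXXX?.Invoke(this, args);
}
```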

Interesting, huh?