webtech @ 30 June 2013

My team has been doing a lot of work with the Windows Kernel Cache lately, and in the process we've come across some useful command lines for managing and viewing the state of the kernel cache.  Since I couldn't easily find (through either Bing or Google) any reference to these commands, I thought I'd take a few minutes to write some of them up in the hopes that crawlers will connect the dots for future searches.

While these commands are really only useful if you already know something about the kernel cache, I'll provide a brief description of the system for those (like me) who are too impatient to read the full MSDN article that I linked to earlier in this post.  The Windows Kernel Cache is a feature of the HTTP.SYS driver that allows the OS to cache HTTP response objects in kernel memory; when HTTP.SYS receives an incoming request for the URL of an object that is already in the cache, it serves the response directly from there.  This makes for very fast HTTP responses, since the request never has to traverse up the HTTP stack into user space (or through any application code such as IIS or <insert-your-own-httpsys-based-application-here>).

Now that the exhaustive intro on the kernel cache is complete, let's get into those commands (there's a sample session after the list below).  The intended audience is any developer building an application that leverages the kernel cache, or an operations engineer working on a Windows Server running HTTP applications that leverage it.

  • netsh http show cacheparam – this command shows the two configurable parameters of the kernel cache:
    • maxcacheresponsesize – HTTP objects below this size are eligible to be cached in kernel space; objects above this size are not.  For information about how to adjust the size (the default is 256 KB), you can follow the instructions provided on MSDN.
    • cacherangechunksize – the size of the chunks the kernel cache stores for serving range requests. (NOTE: I'm not 100% clear on this one, as the MSDN documentation states that the kernel cache does NOT serve range requests; perhaps it's a setting used by IIS-level caching.)
  • netsh http show cachestate – this command shows the URLs of all objects currently in the kernel cache.  The following information is returned for each cached object:
    • Status code: the HTTP status code that will be sent to clients for requests served from this cached object.
    • HTTP verb: the verb that this cached object will be served for.
    • Cache policy type: the type of policy, which is one of:
      • time to live – objects with this setting use the TTL setting in the response to indicate when to purge the item from the cache
      • user invalidates – objects with this setting stay in the cache until they are explicitly cleared
    • Creation time: the creation time of the object itself.
    • Request queue name: presumably the HTTP.SYS request queue associated with the object. (NOTE: not sure about this one; it may be related to IIS settings.)
    • Content type: the content type of the cached object.
    • Content encoding: the content encoding (if any) of the cached object.
    • Headers length: the size of the headers associated with the cached object. (NOTE: the kernel cache stores all of the headers along with the object, and serves them along with the content for requests served out of the cache.)
    • Content length: the size of the content associated with the cached object.
    • Hit count: the number of requests served from the cached object.
    • Force disconnect after serving: TRUE or FALSE; indicates whether the kernel cache will force a client disconnect after serving the object.
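
And here, for the search crawlers (and my future self), is a rough sketch of a session with these commands.  The url= filter and the UriMaxUriBytes registry value come from my recollection of the MSDN docs rather than from anything above, so treat them as unverified:

    REM Show the two configurable kernel cache parameters
    netsh http show cacheparam

    REM Dump every object currently in the kernel cache
    netsh http show cachestate

    REM If memory serves, you can also filter the dump to a single URL:
    netsh http show cachestate url=http://www.contoso.com/images/logo.png

    REM maxcacheresponsesize is adjusted (per the MSDN article, as I
    REM recall) via the UriMaxUriBytes DWORD value under:
    REM   HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters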

 

 

webtech @ 08 August 2011

(warning: I'm probably either dating myself, or making myself look like a dolt, or possibly both… but here goes 🙂 )

In case you haven't already read my about page (and if you haven't, well, I can't say I blame you), I've been in the software business for *gulp* about 20 years now.  So when it comes to what I perceive as the latest craze around JavaScript all over the place, my perspective is either that of a wise old sage or that of an old curmudgeon.  By “all over the place” I mean at each of the three tiers in a traditional three-tier architecture:

  • Client
  • Server
  • Data

Back in the old days (like 2 or 3 years ago), the only place people wrote a lot of JavaScript was at the client, which made a ton of sense if you were going for reach over richness, since anyone with a browser of reasonable capabilities could use your UI.  Over the last year or so, even the richness argument has started to fade with the advent of HTML5 and CSS3, which let you do all sorts of whizzy and responsive UIs in the browser.  The penetration of these technologies hasn't gotten to the point *yet* where a developer can just make the bet on JavaScript, but that will change in the next year or two (maybe even faster if your demographic is hip enough to predominantly use Safari or Chrome).  There's also the little matter of JavaScript being the only game in town if you're trying to write code for a browser, unless you want to take the bet on Flash/ActionScript.

On the server, you could use JavaScript if you wanted, using something like cscript on Windows, but not a lot of developers went that route, since there were other languages (like C++, Python, Perl, and C#) that were better integrated with the platform, arguably richer, and certainly more performant.  Now that node.js is around and taking off, the integration and performance arguments for non-JavaScript languages have started to wane, so developers have another tool in the toolbox, if you will; the minimal server sketched below gives the flavor.
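
To make that concrete, here's a minimal sketch of a node.js HTTP server (the port number and messages are arbitrary):

    // A minimal node.js HTTP server: JavaScript running on the server tier.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from JavaScript on the server\n');
    }).listen(8080);

    console.log('Listening on http://localhost:8080/');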

And on the data layer, I'm not aware of any options (or, for that matter, much desire) to use JavaScript for querying before things like MongoDB came around.  With Mongo, you can use JavaScript to query your data, as well as do filtering and pivots on the data inside the DB itself; a sketch of what that looks like follows below.  The idea of running code inside the database process, for either filtering or map/reduce-like functionality, is a powerful one, and similar in concept to what Microsoft did when they added .NET coding inside the SQL Server process.  What's new with Mongo is the ability to use JavaScript (at least it's new to me; maybe other database platforms supported this before), but the point of this post is how JavaScript is permeating the other levels of an architecture, and since Mongo is rather popular these days, it's a good example to consider.
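
Here's a rough sketch of JavaScript in the data tier via the mongo shell (the collection and field names are made up for illustration):

    // Run in the mongo shell, which is itself a JavaScript interpreter.
    // A query plus some JavaScript iteration over the results:
    db.orders.find({ total: { $gt: 100 } }).forEach(function (order) {
      print(order._id + ': ' + order.total);
    });

    // Map/reduce executed inside the database process, written in JavaScript:
    db.orders.mapReduce(
      function () { emit(this.customerId, this.total); },   // map
      function (key, values) { return Array.sum(values); }, // reduce
      { out: 'totals_by_customer' }
    );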

Ok, so back to the question: why is JavaScript getting such traction in more places lately?  It's clearly not the only game in town on the server or in the data tier; those layers have existed and exploded for years using other languages and tools.  It's not for performance, since JavaScript, as an interpreted language, isn't going to be faster than, say, C++.  That leaves me thinking it's because of either:

  • interoperability
  • usability

Let's look at the usability case first, since it may be a quicker discussion.  Usability is largely a matter of individual developer preference; that's one of the reasons we have so many different programming languages around today.  Of course, the usability case for JavaScript on the server and data tiers could be made if you're at a company that either already has some JavaScript programmers (who develop your web application client) or needs to hire a bunch of JavaScript programmers to develop your web application client.  The premise here is that you already need to know JavaScript to write the client, so why not leverage the same skills on the server and data layers?

There's a more interesting conversation to be had around interoperability.  Since JavaScript is locked in at the client layer, and data serialization formats like JSON make passing data around a lot easier, it's natural to want to use the same language, and possibly even the same code, on the server and data layers to get the most efficiency out of your developers.  XML has long been thought of as the way to get data across systems in a platform-agnostic manner, but the fact is that while XML is expressive, it's also rather heavyweight and requires clients to do more parsing than JSON does, since JSON can just be eval'ed into a native JavaScript object (see the sketch below).  On the server side, there's long been built-in library and language support for XML, but it invariably still requires more parsing (I think), because that data-format-to-native-object translation isn't guaranteed the way it is with JSON.
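
A quick sketch of that JSON-to-native-object translation (with the obligatory caveat about eval on untrusted input):

    // JSON text maps directly onto JavaScript objects.
    var json = '{"name": "webtech", "posts": 42}';

    // The quick-and-dirty route: eval() works, but it executes arbitrary
    // code, so never use it on untrusted input.
    var viaEval = eval('(' + json + ')');

    // The safer route, built into modern browsers and node.js:
    var viaParse = JSON.parse(json);

    console.log(viaParse.name, viaParse.posts); // webtech 42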

So to summarize, it seems that “the fuss” about JavaScript all over the place is really about efficiency (where developers are the resources being optimized), which makes sense as long as projects that aren't staffed with JavaScript programmers from the start understand they'll be making tradeoffs if they pick platforms like node.js.  Of course, the good news for JavaScript programmers is that your skills should be even more in demand now 🙂

 

webtech @ 23 May 2011

In my last post I talked a little about the development of the new HTML Streetside view on Bing Maps.  Thanks to our performance test team, I now have a great side-by-side video showing the actual start times of the two experiences.  Check it out:

 

[Video: Streetside v. Streetview from bert molinari on Vimeo]