webtech @ 30 June 2013

My team has been doing a lot of work with the Windows Kernel Cache lately, and in the process we've come across some useful command lines for managing and viewing the state of the kernel cache.  Since I couldn't easily find (through either Bing or Google) any reference to these commands, I thought I'd take a few minutes to write some of them up in the hope that crawlers will connect the dots for future searches.

While these commands are really only useful if you already know something about the kernel cache, I'll provide a brief description of the system for those (like me) who are too impatient to read the full MSDN article I linked to earlier in this post.  The Windows Kernel Cache is a feature of the HTTP.SYS driver that lets the OS cache HTTP response objects in kernel memory; when HTTP.SYS receives an incoming request for a URL whose object is already in the cache, it serves the response directly from there.  This makes for very fast HTTP responses, because the request never has to traverse up the HTTP stack into user space (or into any application code such as IIS or <insert-your-own-httpsys-based-application-here>).

Now that the exhaustive intro on the kernel cache is complete, let's get into those commands (there's a quick command-line recap right after the list below).   The intended audience is any developer building an application that leverages the kernel cache, or an operations engineer working on a Windows Server running HTTP applications that leverage it.

  • netsh http show cacheparam – this command shows the two configurable parameters of the kernel cache:
    • maxcacheresponsesize – HTTP objects below this size are eligible to be cached in kernel space; objects above this size are not.  For information about how to adjust the size (the default is 256 KB), you can follow the instructions provided on MSDN.
    • cacherangechunksize – the size of the chunks that the kernel cache stores for serving range requests. (NOTE: I'm not 100% clear on this one, as the MSDN documentation states that the kernel cache does NOT serve range requests; perhaps it's a setting used by IIS-level caching.)
  • netsh http show cachestate – this command will show the URLs of all objects currently in the kernel cache.  The following information is returned for each object that is in the cache:
    • Status code: the HTTP status code that will be sent to the client for requests served for this object.
    • HTTP verb: the verb that this cached object will be served for.
    • Cache policy type: the type of policy, which is one of:
      • time to live – objects with this setting use the TTL setting in the response to indicate when to purge the item from cache
      • user invalidates – objects with this setting wait until the object is explicitly cleared before purging
    • Creation time: the creation time of the object itself
    • Request queue name: (NOTE: not sure about this one, may be related to IIS settings)
    • Content type: the content type of the cached object.
    • Content encoding: the content encoding (if any) of the cached object.
    • Headers length: the size of the headers associated with the cached object (NOTE: the kernel cache stores all the headers along with the object, and serves the headers along with the content for requests served out of cache)
    • Content length: the size of the content associated with the cached object.
    • Hit count: the number of requests served by the cached object.
    • Force disconnect after serving: TRUE or FALSE; indicates whether the kernel cache will force a client disconnect after serving the object.
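
For quick reference, here are those commands (plus the ones I use to flush entries) as you'd run them from an elevated command prompt.  The url filter and delete syntax below are from memory, so double-check them with netsh http show cachestate /? and netsh http delete cache /? on your build of Windows before relying on them:

```
:: View the two configurable kernel cache parameters
netsh http show cacheparam

:: Dump every object currently in the kernel cache (all the fields listed above)
netsh http show cachestate

:: Show only the cache entry for a specific URL (syntax from memory, verify with /?)
netsh http show cachestate url=http://www.example.com:80/logo.png

:: Flush a specific URL (or omit url= to flush everything) from the kernel cache
netsh http delete cache url=http://www.example.com:80/logo.png
```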


misc @ 21 April 2013

Being from New England, the events of the last week have hit pretty close to home for me, even though I'm 3,000 miles away on the other side of the US.  I'm not one to write or talk a lot about these sorts of situations, but after watching today's Red Sox/Royals game, I wanted to save and share the link to the pregame video; it's incredibly moving.

http://wapc.mlb.com/bos/play/?content_id=26426799&topic_id=44973348&c_id=bos

managing @ 16 September 2011

While I’m nowhere near Captain Dan Berg status (in fact, I really have no clue about how to build a ship in a bottle, probably because I really don’t care much about ships in a bottle), I’m using the title here for two reasons:

  • It’s possibly catchy enough for someone to think there’s something really deep in this post
  • The real content of this post is inspired by a story I read that involved ships in a bottle (confused yet?)

A while back, like a decade ago, I remember reading some comments over on Slashdot, and there was one in particular that resonated with me (although not enough for me to remember anything other than a butchered version of the original quote):

“I work at a software company for a manager that I love.  Whenever I walk by his office, he’s building ships in a bottle not coding, writing email or doing other manager-y things.  Yet his projects are always on track, his team is happy and he’s always available to chat and help with whatever I need”

My first thought after reading this was "man, I want to be that manager, as long as I can replace building ships in a bottle with watching baseball and eating meatball sandwiches!"  This was before I was a manager, so I was full of youthful snarkiness and figured this must actually be a pretty poor manager who had somehow duped this guy into thinking he was competent.  Surely real managers should be coding away, writing design documents, triaging bugs constantly, etc.!  But over time (both as an IC and a manager) I've learned to really appreciate (and in fact incorporate) the message behind this quote, which distills down to my personal philosophy when it comes to managing:

“Hire good people and let them work”

In my experience, the best managers are adept at identifying and recruiting top talent, and, just as importantly, at constantly re-recruiting the team.  Focusing like a laser on people, above projects and process, is a winning formula.  I've been fortunate to work with some really tremendous developers over the years, passionate and smart, and there's just no substitute for a great team.  It's also important to think of the team holistically, not just as a collection of individuals.  Thinking holistically forces a manager to ensure the team is balanced (in terms of both experience and passion) and gets along well together.  It can take time for a team to gel, but once it does it's an amazing sight to see…and it gives the manager more time for meatball sandwiches 🙂

webtech @ 08 August 2011

(warning: I’m probably either dating myself, or making myself look like a dolt, or possibly both….but here goes 🙂 )

In case you haven’t already read my about page (and if you haven’t, well I can’t say I blame you), I’ve been in the software business for *gulp* about 20 years now.  So my perspective is either that of a wise old sage, or of an old curmudgeon when it comes to what I perceive as the latest craze around JavaScript all over the place.  By “all over the place” I mean at each of the three tiers in a traditional three-tier architecture:

  • Client
  • Server
  • Data

Back in the old days (like 2 or 3 years ago), the only place people wrote a lot of JavaScript was on the client, which made a ton of sense if you were going for reach over richness, since anyone with a browser of reasonable capabilities could use your UI.  Over the last year or so, even the richness argument has started to fade with the advent of HTML5 and CSS3, which let you do all sorts of whizzy and responsive UIs in the browser.  The penetration of these technologies hasn't gotten to the point *yet* where a developer can just make the bet on JavaScript, but that will change in the next year or two (maybe even faster if your demographic is hip enough to predominantly use Safari or Chrome).  There's also the little matter of JavaScript being the only game in town if you're trying to write code for a browser, unless you want to take the bet on Flash/ActionScript.

On the server, you could use JavaScript if you wanted, using something like cscript on Windows, but not a lot of developers went that route since there were other languages (like C++, Python, Perl, C#) that were better integrated with the platform, arguably richer, and certainly more performant.  Now that node.js is around and taking off, the integration and performance arguments for non-JavaScript languages have started to wane, so developers have another tool in the toolbox, if you will.
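
To make that concrete, here's roughly what a minimal node.js HTTP server looks like.  This is just an illustrative sketch using node's built-in http module; the port and payload are arbitrary:

```javascript
// A minimal node.js HTTP server -- the same language you'd use in the browser,
// now handling requests on the server side.
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Type': 'application/json' });
  // JSON comes "for free" since we're already in JavaScript
  response.end(JSON.stringify({ path: request.url, time: new Date() }));
}).listen(8080);

console.log('Listening on http://localhost:8080/');
```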

And at the data layer, I wasn't aware of any options (or much desire) to use JavaScript for querying before things like MongoDB came around.  With Mongo, you can use JavaScript to query your data as well as to do filtering and pivots on the data inside the DB itself.  The idea of running code inside the database process, for either filtering or map/reduce-like functionality, is a powerful one, and similar in concept to what Microsoft did when they added .NET coding inside the SQL Server process.  What's new with Mongo is the ability to use JavaScript (at least it's new to me; maybe other database platforms supported this before), but the point of this post is how JavaScript is permeating the other levels of an architecture, and since Mongo is rather popular these days, it's a good example to consider.
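
For a flavor of what that looks like, here's the kind of JavaScript you can run in the mongo shell.  The orders collection and its fields are made up for the example:

```javascript
// Hypothetical "orders" collection -- filter, then aggregate with map/reduce,
// all expressed in JavaScript and executed by the database itself.
db.orders.find({ status: 'shipped', total: { $gt: 100 } });

db.orders.mapReduce(
  function () { emit(this.customerId, this.total); },   // map
  function (key, values) { return Array.sum(values); }, // reduce
  { query: { status: 'shipped' }, out: 'totals_by_customer' }
);
```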

Ok, so back to the question: why is JavaScript getting such traction in more places lately?  It's clearly not the only game in town on the server or in the data tier; those layers have existed and exploded for years using other languages and tools.  It's not for performance, since JavaScript, as an interpreted language, isn't going to be faster than, say, C++.  That leaves me thinking it's because of either:

  • interoperability
  • usability

Let's look at the usability case first, since it may be the quicker discussion.  Usability is largely a matter of individual developer preference; that's one of the reasons we have so many different programming languages around today.  That said, the usability case for JavaScript on the server and data tiers can be made if your company either already has JavaScript programmers (who develop your web application client) or needs to hire a bunch of them to develop that client.  The premise is that you already need to know JavaScript to write the client, so why not leverage the same skills on the server and data layers?

There's a more interesting conversation to be had around interoperability.  Since JavaScript is effectively locked in at the client layer, and data serialization formats like JSON make passing data around a lot easier, it's natural to want to use the same language, and possibly even the same code, on the server and data layers to get the most efficiency out of your developers.  XML has long been thought of as the way to move data across systems in a platform-agnostic manner, but the fact is that while XML is expressive, it's also rather heavyweight and requires clients to do more parsing than JSON does (since JSON can just be eval'ed, or better yet JSON.parse'd, into a native JavaScript object).  On the server side there's more built-in library and language support for XML, but it still requires an extra mapping step (I think), because the data-format-to-native-object translation isn't guaranteed the way it is with JSON.
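
To illustrate the JSON point (and since eval on untrusted input is a well-known security hole), here's a small browser-side sketch; modern browsers ship JSON.parse, which gives you the same native-object convenience without the risk:

```javascript
var payload = '{ "name": "widget", "price": 9.99, "tags": ["new", "sale"] }';

// The old-school approach the "eval'ed into a native object" argument refers to:
var risky = eval('(' + payload + ')');

// The safer equivalent -- same result, no arbitrary code execution:
var obj = JSON.parse(payload);
console.log(obj.name, obj.tags.length); // direct property access, no mapping step

// Contrast with XML, where you parse into a DOM and still have to walk it:
// var doc = new DOMParser().parseFromString(xmlString, 'text/xml');
// var name = doc.getElementsByTagName('name')[0].textContent;
```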

So to summarize, it seems that "the fuss" about JavaScript all over the place is really about efficiency (where developers are the resource being optimized), which makes sense as long as projects that aren't staffed with JavaScript programmers from the start understand they'll be making tradeoffs if they pick platforms like node.js.  Of course, the good news for JavaScript programmers is that your skills should be even more in demand now 🙂


Over the past few months I've gotten increasingly interested in REST services. Maybe it's the beautiful simplicity of the concept, or the fact that I can get obsessive about things like URL schemas and HTTP verb usage, but the technology has really gotten my attention. In addition to spending way too much time reading about the merits of POST v. PUT when uploading content, or whether you should (or shouldn't) put the version number of your API in the URL, I've also gotten interested in authentication and authorization of requests.

Software security in general has always been a topic that interests me enough to read up and self-educate, though I'll also admit to never being too interested in all the gory details. So while working on the REST API that I've been playing with at work, I was hoping to find something that would satisfy the following authentication/authorization requirements:

  • making sure the calling user is who they say they are
  • having some piece of data that uniquely identifies the calling user
  • supporting multiple authentication services (Live ID, Yahoo, Google, etc)

A couple of months ago I did some initial reading on OAuth, which looked rather promising, but then while attending a session on OAuth at Web 2.0 in San Francisco, I got scared away from the technology when the speaker said something to the effect of "OAuth should be killed", which is not a ringing endorsement coming from someone much more knowledgeable about the technology than I am. I looked into Facebook Connect a little, then someone at work told me about Windows Azure Access Control Service (ACS). A quick read through some docs on the codeplex site and other sources on the web had me intrigued. Not being one to read lots of documentation (or instructions in general), I jumped into trying to integrate ACS into my REST API, which comprises read/write operations for files and some minimal user management capabilities too.

The results are promising.  Once I got my head wrapped around the various components and their levels of interaction (which are described in the sequence diagrams on the codeplex site), the integration with my service wasn't too painful, although I did find the web examples a little lacking, specifically around browser-based (HTML/JavaScript) clients interacting with a site or service that uses ACS for authentication and authorization.  After some experimenting, I got the whole thing working by doing the following (I may end up posting the code at some point if there's interest, but hopefully a description is good enough for starters):

  • Creating a new Access Control Namespace (done via the Azure Management site)
  • Configuring the new Access Control Namespace for my service
    • Picking the identity providers that I wanted to use (Live ID, Yahoo and Google)
    • Adding my service as a Relying Party Application
    • Configuring my service’s Relying Party Application settings, which includes:
      • Setting the Realm and Return URLs (I used the default.aspx page for both; this is where ACS will redirect the browser upon completion of authentication via the identity provider and authorization token minting via ACS).
      • Setting the Token format (I used SWT since it's a little more web-friendly, being plain text rather than XML)
      • Setting the Rule groups (this is where I tell ACS what I want it to add to the SWT token that comes back to my service upon successful authentication)
      • Setting the Token Signing Key (my service needs this so it can verify the HMAC signature on the SWT token that ACS passes back upon successful authentication)
  • Adding a link from my site to an ACS-hosted page that lets the user select their preferred authentication provider and then redirects them to that provider's login page.  The ACS management portal generates this link on the Development -> Application Integration page; just copy/paste the link into a page on your site and ACS does the rest.
  • Adding parts of the shared code (found in the Management\ManagementService\Common directory of the downloadable sample code from codeplex), which:
    • Gets the SWT token from the incoming request
    • Saves the SWT token in a cookie to make it accessible to subsequent requests to my service
    • Checks the authorization status in the SWT token (a rough sketch of this check appears after this list).  A token is considered valid if it has:
      • An HMAC signature that matches the one ACS generated when it signed the token (this is where my service code needs the Token Signing Key)
      • An expiration time that hasn't passed
      • A trusted issuer (configured in my service code)
      • A trusted audience (configured in my service code)

With all of this, I have a WCF REST service that leverages ACS as the authentication/authorization provider.  I have the code set up to allow unauthenticated reads but require authentication and authorization on writes.  I can let users log in with any of Live ID, Google or Yahoo (to minimize the chances of them needing to sign up for a new account), and I can get the user's email address, which I use as a means of identifying users internally, from the SWT token that comes back.  Pretty cool stuff!

webtech @ 23 May 2011

In my last post I talked a little about the development of the new HTML Streetside view on Bing Maps.  Thanks to our performance test team, I now have a great side-by-side video showing the actual start times of the two experiences; check it out:


Streetside v. Streetview from bert molinari on Vimeo.

Last week, my team at Microsoft released an HTML4 Streetside experience for Bing Maps.  Before this release, users on Bing Maps needed to have Silverlight installed to view our Streetside imagery; with this release, any user with a relatively modern browser (IE7+, FF3.5+, Chrome 9+ and Safari 5+) can use the viewer without any additional installs.  In addition to removing the need for an install, we've also dramatically improved our startup times over both the Silverlight experience and our major competitor (ok, really the 800 lb gorilla) in this space, Google's StreetView.  In our performance test labs, we've seen startup times improve by over 50% compared to the same scenarios using either the Bing Maps Silverlight viewer or the Google Maps Flash viewer.  Since mapping in general is such a task-based application, it's important to have features that are not only functional but fast.  That was a big driver for us in moving to HTML, and even after that, in spending time tuning the client/server interactions as well as the client-side JavaScript to feel quick and responsive.

There are two things that really stand out to me when I look back at this project.  First and foremost (as an engineer), I'm really excited by the strides we've made in performance.  We did it with some great collaboration with the Bing Maps team, sharing a bunch of core code for things like map tile fetching and rendering, which helps us align the overall look and feel of the Streetside experience with what the Maps team has done on the bing.com/maps site.  We're also sharing code investments in other areas, like inertial scrolling, which is required for a good user experience in Streetside and is now available when viewing top-down maps too.  And speaking of sharing, we were also able to make use of the same data that we created for the iPhone Streetside experience (available in the Bing App for iPhone); picking the right data and serialization formats last winter enabled faster delivery of the HTML experience last week.

In addition to the engineering work, we also made a concerted effort to release an innovative and useful experience for our users.  We're really looking forward to hearing feedback on this feature, since we fully admit there can be some controversy here: we made tradeoffs (as virtually any engineering project must) around performance, user experience, user reach and innovation.  There are certain to be some who are unsure of the new experience compared to what Bing Maps Streetside and Google Maps StreetView have delivered in the past, and we understand that; we've understood it all along, actually :).  One of the core scenarios (if not the core scenario) for human-scale mapped media is the ability to really see what a place looks like, and to see it in context with its surroundings.  This is useful in lots of places, like finding a friend's house or finding the restaurant where you're meeting someone.  Since our Streetside coverage is highly concentrated around core urban areas, we made the decision to try the planar panorama experience v. the cube-mapped panorama experience: in those urban cores, the planar panorama looks quite good and gives us a lot of real estate at the bottom of the viewer to display information about businesses and transit stops on the street.

Of course, ultimately it will be user behavior that determines how successful the experience is; we'll be watching closely to see what everyone thinks!

For this post, I'm going to write about infrastructure and its value (both positive and negative) in the lifecycle of developing software.  My goal is to write this in a way that applies to virtually any project, large or small, across a diverse set of technologies, but ultimately I'll be speaking from my most recent experiences, which are with reasonably large-scale (100s of developers) projects.

Since the term can easily mean different things to different people, I'll start by defining it…Wikipedia is always a good place to start, so here's their entry for the word (in the generic sense, not necessarily software-specific):

Infrastructure is the basic physical and organizational structures needed for the operation of a society or enterprise,[1] or the services and facilities necessary for an economy to function.[2] The term typically refers to the technical structures that support a society, such as roads, water supply, sewers, electrical grids, telecommunications, and so forth. Viewed functionally, infrastructure facilitates the production of goods and services; for example, roads enable the transport of raw materials to a factory, and also for the distribution of finished products to markets and basic social services such as schools and hospitals.[3] In military parlance, the term refers to the buildings and permanent installations necessary for the support, redeployment, and operation of military forces.[4]

If we convert that to "software speak", we can file at least the following under infrastructure (note: there may be more, I'm just going for the obvious ones here):

  • Build hardware and software – all the tools + scripts, and the hardware they run on, that are required to take your code and turn it into a deployable package (e.g. a binary for a phone app, or a collection of JavaScript and PHP files for a web app).
  • Deployment hardware and software – all the tools + scripts, and the hardware they run on, that are required to take your deployable package and, well, deploy it onto representative hardware.  Note that I'm being careful not to assume we're just talking about web sites and web services here; deployments can go to phones or other non-server hardware too.
  • Test hardware and software – all the tools + scripts, and the machines they run on, that are required to test your deployed packages.
  • Reporting hardware and software – ultimately, all of the above need a place to record results, both for diagnostics and for detecting trends over time; this all falls into your reporting infrastructure.
  • Source control and work item tracking hardware and software – projects of any reasonable complexity need a way to track bugs and/or tasks (many smaller teams track the latter in more Agile ways), as well as source code.

That looks like a good list to start with, so now on to the "value" part of the post.  Why do software development projects need all this stuff?  Why do we spend time and money on infrastructure when, in the abstract, it doesn't directly add value to the end product?  The value of infrastructure in software is just like the value of infrastructure in the broader sense (using the definition from Wikipedia above): it allows us to do our jobs, and poor infrastructure can often prevent us from doing our jobs.  I will readily admit that there's a point of diminishing returns with infrastructure, but at least in my experience (which spans working at large and small companies), we too frequently forget how valuable good infrastructure is.  For example, imagine the scenario where you have a team of, say, 6 developers, all working on a feature that involves:

  • Developing a database to store data (aka data tier)
  • Developing a service to retrieve the data over HTTP (aka services tier)
  • Developing a web page to render the data in a browser (aka client tier)

Sounds pretty simple, and it's the kind of system many people work on every day.  A feature with enough complexity to require 6 developers will need some coordination and lots of testing, so we'll need a way to coordinate the work across the team.  This coordination needs to happen both inside a tier (so two or more devs can work on the data tier simultaneously) and across tiers (so two or more devs can work on client/services interactions).  There's more than one way to solve this problem, but typically you'd want some infrastructure (as defined above) to enable both the collaboration within a tier and the integration across tiers.  If everything (builds, testing, deployment) is working great, the devs can happily check in their code, wait a short period of time for it to build, wait another short period of time for it to pass testing, and then have it automatically deployed to a server so others can test and/or integrate with their changes.  Since there are several stages in that pipeline, and we already know that context switches are expensive, it behooves the business to have the pipeline execute both expediently and reliably.  If it's slow, developers will go do something else while they wait, and the cost of getting back into both the original task and the task they pick up in the meantime goes up the longer it takes to finish.  If it's unreliable, developers will have to spend time troubleshooting and fixing the infrastructure when they could be writing code that directly adds value to the business/feature.
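
To make that concrete, here's a deliberately tiny node.js sketch of such a check-in pipeline: run each stage in order, stop on the first failure, and record how long each stage took so that both slowness and unreliability show up in your reporting.  The stage commands are placeholders for whatever build, test and deploy tooling you actually use:

```javascript
var exec = require('child_process').exec;

// Placeholder commands -- substitute your real build/test/deploy tooling.
var stages = [
  { name: 'build',  cmd: 'msbuild MySolution.sln /v:quiet' },
  { name: 'test',   cmd: 'runtests.cmd' },
  { name: 'deploy', cmd: 'deploy.cmd test-cluster' }
];

function runStage(i) {
  if (i >= stages.length) { console.log('Pipeline succeeded'); return; }
  var stage = stages[i];
  var start = Date.now();
  exec(stage.cmd, function (error, stdout, stderr) {
    var seconds = (Date.now() - start) / 1000;
    console.log(stage.name + ' finished in ' + seconds + 's');
    if (error) {
      // Reliability matters: fail fast and loudly so the pipeline gets fixed once,
      // instead of every developer troubleshooting it on their own.
      console.error(stage.name + ' failed: ' + stderr);
      return;
    }
    runStage(i + 1); // speed matters: kick off the next stage immediately
  });
}

runStage(0);
```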

Since most developers don't work for free, what would you rather have your team doing: writing code for your feature, or waiting on slow infrastructure and troubleshooting unreliable infrastructure?