Released RequestReduce 1.7.0: Giving the RequestReduce onboarding story a happy beginning by Matt Wrock

About six weeks ago I blogged about an issue with RequestReduce and its limitations in resolving the image properties of each CSS class. To recap: until today, RequestReduce treated each CSS class as an atomic unit and ignored any other classes it might inherit from. The worst side effect shows up on a page that already has sprites but uses one class to specify the image, width, height, and repeatability, and then uses several separate classes, each containing the background-position of one image in the sprite sheet. Something like this:

.nav-home a span {
    display: block;
    width: 110px;
    padding: 120px 0 0 0;
    margin: 5px;
    float: left;
    background: url(../images/ui/sprite-home-nav.png?cdn_id=h37) no-repeat 0 1px;
    cursor: pointer;
}

.nav-home a.get-started span{background-position:0 1px}

.nav-home a.download span{background-position:-110px 1px}

.nav-home a.forums span{background-position:-220px 1px}

.nav-home a.host span{background-position:-330px 1px}

What RequestReduce would do in a case like this is resprite .nav-home a span, because that class has all of the properties needed to construct the viewport and parse out the sprite correctly. However, once this was done, the four classes below it, containing the positions of the actual images, rendered a distorted image. This is because RequestReduce created a new sprite image with the original images placed in different positions than they occupied on the original sprite sheet. So the background positions of the other nav-home images now point to invalid positions.

If you are creating a site that pays tribute to abstract art, you may be pleasantly surprised by these transformations. You may be saying, “If only RequestReduce would change my font to Wingdings, it would be the perfect tool.” Well, unfortunately, you are not the RequestReduce target audience.

RequestReduce should never change the visual rendering of a site

One of the primary underlying principles I try to adhere to throughout the development of RequestReduce is to leave no visible trace of its interaction. The default behavior is always to optimize as much as possible without risk of breaking the page. For example, I could move scripts to the bottom or dynamically create script tags in the DOM to load them asynchronously, and in many cases improve rendering performance, but very often this would break functionality. Any behavior that could potentially break a page must be “requested,” or opted in to, via config or API calls.

This spriting behavior violated that rule all too often. I honestly did not know how widespread this pattern was. My vision is to have people drop RequestReduce onto their site and have it “just work” without any tweaking. What I had been finding was that many sites, and most if not all “sophisticated” sites already using some spriting, render with unpleasant side effects when they deploy RequestReduce without adjustments. While I have done my best to warn users of this in my docs and provide guidance on how to prepare a site for RequestReduce, I had always thought that the need for this documentation and guidance would be the exception rather than the rule.

I have now participated in onboarding some fairly large web properties at Microsoft onto RequestReduce. The process of making these adjustments proved to be a major burden. It’s not rocket-science hard; it’s just very tedious and time consuming. I think we’d all rather be building rockets than twiddling with CSS classes.

Locating properties in one CSS class that another class can use

It just seemed to me that, given a little work, one could discover properties in one CSS class that could be included in another. So my first stab at this was a very thorough reverse engineering of CSS inheritance and specificity scoring. For every class, I determined all of the other classes that could potentially “contribute” to it. So given the selector:

h1.large .icon a

DOM elements that match this selector could also inherit from:

a
.icon a
h1 .icon a
.large .icon a
.large a
etc...

For every class that had a “transformable” property (background-image or background-position), I would iterate over all other classes containing properties I was interested in (width, height, padding, and the background properties) and order them by specificity. The rules of CSS specificity can be found here. Essentially, each ID in a selector is given a score of 100, each class and pseudo-class a score of 10, and each element and pseudo-element a score of 1. Inline styles get a score of 1000, but I can’t see the DOM, and the “universal” selector (*) is given a score of 0. When two selectors have matching scores, the one that appears last in the CSS wins.
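
To make the scoring concrete, here is a toy C# version of such a scorer. This is purely illustrative and makes my own simplifying assumptions (no attribute selectors, pseudo-selectors, or inline styles); it is not the code RequestReduce actually uses.

using System;
using System.Linq;

// Toy specificity scorer: each ID scores 100, each class 10, each element 1.
// Pseudo-classes/elements and attribute selectors are ignored for brevity.
public static class Specificity
{
    public static int Score(string selector)
    {
        int score = 0;
        var tokens = selector.Split(new[] { ' ', '>', '+', '~' },
                                    StringSplitOptions.RemoveEmptyEntries);
        foreach (var token in tokens)
        {
            score += 100 * token.Count(c => c == '#'); // IDs
            score += 10 * token.Count(c => c == '.');  // classes
            if (char.IsLetter(token[0]))               // leading element name
                score += 1;
        }
        return score;
    }
}

So Score("h1.large .icon a") yields 22: two elements (1 + 1) plus two classes (10 + 10).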

Once I had this sorted list, I would iterate down it, stealing missing properties, until all of the properties I needed were filled in or I hit the end of the list.
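
In rough C#, that pass looked something like the sketch below. CssClass is a hypothetical stand-in for whatever model the CSS parser produces; none of these names come from the actual RequestReduce source.

using System.Collections.Generic;
using System.Linq;

public class CssClass
{
    public string Selector;
    public Dictionary<string, string> Properties =
        new Dictionary<string, string>();
}

public static class PropertyStealer
{
    // 'contributors' must arrive sorted by descending specificity,
    // with document order breaking ties, per the scoring rules above.
    public static void FillMissing(CssClass target, IEnumerable<CssClass> contributors)
    {
        var needed = new[] { "width", "height", "padding", "background-position" };
        foreach (var candidate in contributors)
        {
            foreach (var property in needed)
            {
                string value;
                if (!target.Properties.ContainsKey(property) &&
                    candidate.Properties.TryGetValue(property, out value))
                {
                    target.Properties[property] = value; // steal the missing property
                }
            }
            if (needed.All(p => target.Properties.ContainsKey(p)))
                break; // all needed properties filled; stop early
        }
    }
}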

At first this worked great and I thought I was really on to something, but I quickly realized that it was breaking the experience all too often. Given the endless possibilities of DOM structures, there is simply no way to calculate, without knowledge of the DOM, which class is truly representative. Eventually I settled on only matching a selector that has a background-position but no background-image with the most eligible and specific selector that does contain a background-image. While even this strategy could break down, so far every page I throw it at renders perfectly.

Although this feature does not add any new optimizations and mostly just helps already-sprited sites onboard onto RequestReduce, I’m excited to provide a smoother adoption story. As a project owner who wants his library to be used, I want adoption to be as smooth and frictionless as possible.

What else is in v1.7.0?

Here is a list of the other features that made it into this release:

  1. Improved performance of processing pages with many sprites. This is done by loading each master sprite sheet into memory only once, rather than each time an individual sprite image is found.
  2. Prevent RequestReduce from creating an empty file when it processes a single script or CSS file containing only a comment, since after minification the file is empty.
  3. Support Windows Authentication when pulling script/CSS/sprite resources.

What’s Next?

Good question. Probably the ability to process scripts that expire in less than a week. Soon after that, I want to start tackling foreground image spriting.

Reflecting on two years as a Microsoft employee by Matt Wrock

So it’s New Year’s Day and I’m thinking maybe it’s appropriate to write a post that’s deep and introspective. Something that speaks to a broad audience and asks the reader to stop and reach deep within. Real deep. OK, even deeper…deeper still. Wait. Uh oh, we’ve gone too deep now. Pull back. Further. Keep going. Ugh…now I’m just tired.

Anyhoo, I really have been wanting to write about the things I have learned since joining Microsoft: things I have learned about working at Microsoft in general, and things I have learned about software engineering. So I’ll start with some observations about my employee experience at Microsoft and then get a bit more technical, talking about practices I have learned and found valuable. These are not necessarily “Microsoft practices,” just things I have learned by working with a new team and new people.

This dovetails nicely into my first point:

One Microsoft Way is just an address

It’s very common to hear non-Microsoft employees say things like “Microsoft employees are…” or “That’s very typical Microsoft.” People often think of Microsoft as an organization that acts in one accord, where all employees, managers, and practices can be codified within a single set of characteristics, values, and practices. I largely subscribed to this perspective before becoming employed at Microsoft.

The reality is that Microsoft is like a collection of many small- to mid-sized companies, and each can have dramatically different practices and employee profiles. There are teams that follow a variety of practices, from traditional waterfall to “scrummerfall” to textbook scrum to strict TDD and pair-programming, XP-like disciplines. There are teams where everyone works in their own office, others that always work in an open team room, and others that work sometimes in a team room but have their own office for some “alone time.” There are teams that follow strict policies prohibiting the use of any Open Source Software, others that actively seek out OSS to incorporate into their code base, and others that look for opportunities to open source their own projects.

You have Microsofties who will only ever use Microsoft tools and others carrying around iPhones and wearing Chrome T-shirts. I think this is an important fact to keep in mind and most likely typical of other large companies. Perhaps 15 years ago the culture was more homogeneous, but it is far from that now. There is a lot going on behind Microsoft source control that would surprise a lot of anti-MS geeks.

Per Capita, Microsoft engineers are the brightest and most passionate I have ever worked with

Honestly, this has been both a curse and a blessing for me, but by far mostly a blessing. Overall, the caliber of the engineers I work with is higher than at the startups I have worked for in the past. I had been used to easily obtaining the role of “rock star” developer at previous gigs. This is not because I am particularly smart or clever. Far from it. I just work incredibly hard (too hard) and really, really like what I do. Being around so many great developers was difficult to adjust to because it blends poorly with my self-conscious nature. It’s normal for me to go through a three-month period of “Oh shit, I’m gonna get fired today,” and this period was dramatically prolonged at Microsoft.

The flip side is that I get to come to work every day and blab for long periods about technology and development practices and disciplines with others who are equally, if not more, informed and enthusiastic. I am constantly learning new technical tidbits, insightful disciplines, and interesting ways of looking at problems.

There is a “Microsoft Bubble” and it must be popped

This may appear to contradict my first point. I still stand by my statement that the Microsoft employee population and vast array of different business units cannot be pigeonholed. However, I have found a surprisingly strong tide of what some call “old Microsoft.” What is old Microsoft? Well, I‘ve only been there two years, so I can’t speak with much authority here. Some think it’s a group of grey-haired, clean-shaven engineers hunkering down in the basement of Building One, drawing UML diagrams and a huge Gantt chart behind a technology to bring down all continuous integration servers, DVCS repositories, and instances of Firebug. I’m keeping backups of all of these in case this is true.

In all honesty, “old Microsoft” to me is waterfall processes, large monolithic architectures, and a “not invented here” mentality. What is interesting to me is that this is not an active tide striving to beat down any opposing ideology. Rather, it is sheer ignorance resulting from a simple lack of awareness of what goes on outside of Microsoft. I have noticed that, especially among the upper ranks, it is infrequent to see outsiders recruited. There are a lot of very seasoned engineers who have been in the industry for years and years, and almost every one of those years has been at Microsoft. Some of these individuals simply have not had exposure to other organizational practices and have grown comfortable with what they have practiced for years.

These are not evil people. They are smart and simply need to be educated. I need to be educated. We all need education every day, and a diverse education at that. If others do not approach this “old guard” and introduce them to evolving and progressive practices, whether because they are intimidated or afraid of appearing disloyal to the company, it is mainly Microsoft that will suffer. Fortunately, there are some very influential, and some not so influential, folks doing just this. As a result, you are seeing more groups releasing earlier and more often, using OSS instead of the stock MS tools, and using tools like Mercurial instead of TFS. These engineers are more loyal to quality than they are to the Microsoft brand. I believe it is employees like these that an employer should seek out. An employee loyal to quality and engineering efficiency should never be perceived as undermining Microsoft’s interests, but rather as raising the bar toward higher standards and continuous improvement.

Some Valuable Technical Practices

Here are some purely technical practices I have picked up by working with my team over the past couple of years. As I said before, these are not practices unique to Microsoft; they are simply a collection of new tools and habits, learned the same way I would learn from any other new team.

Test Driven Development: It supports and is not opposed to rapid development

For years before joining Microsoft I had been highly intrigued by TDD (Test Driven Development). It was a practice I truly believed in and wanted to master. Unfortunately, I was too mired in managerial responsibilities to really master it and teach it to others. Also, TDD is one of those practices that is difficult to learn on your own, and it is very easy to adopt anti-patterns without knowing it. One great example is understanding the differences between unit tests and integration tests. If you look at the “unit tests” written by someone without any guidance, these tests are often actually integration tests and not unit tests at all. Developers end up frustrated writing these tests because they are fragile, time consuming, difficult, and sometimes nearly impossible to write before writing implementation code. A typical response you hear from developers struggling to adopt TDD is something like “we tried it but we ran out of time” or “management did not want us spending the time writing tests.”

Having lived in a strict TDD culture and learned a lot of the tricks and principles of true TDD, I now see that I don’t have time NOT to write unit tests. Yes, I do believe that unit tests make V1 take longer to develop. Some may disagree. But with each new version or release, unit testing incrementally increases the velocity of getting new features to market. When done right, unit tests are the safety harness any new team member needs when adding code to an existing codebase. I have worked with codebases that had parts of the code developers were scared to death to touch for fear of breaking some functionality they were not aware of. On the other hand, if there is good test coverage, I can be fairly confident that if I break existing functionality, tests will fail and alert me to that fact.

One of the key learnings for me about TDD was the principle of testing ONLY the functionality of the method being tested. Too often, tests try to test all the way down the stack, and you end up with a lot of tests repeating one another. These tests take longer to write and much longer to update when refactoring code. Learning to mock or fake underlying services is key here. If you have an MVC action method that writes to a logging service, there is no need to test the logging in the tests built around the action method. You do that in the tests you write for the logging service.
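
As a concrete (and hedged) illustration of that principle, here is what such a test might look like with Moq and xUnit, two of the tools mentioned later in this post. The controller and logger types are hypothetical stand-ins, not from any real codebase:

using Moq;
using Xunit;

public interface ILogger { void Log(string message); }

public class HomeController
{
    private readonly ILogger logger;
    public HomeController(ILogger logger) { this.logger = logger; }
    public void Index() { logger.Log("Index requested"); }
}

public class HomeControllerTests
{
    [Fact]
    public void Index_LogsTheRequest()
    {
        var logger = new Mock<ILogger>();
        var controller = new HomeController(logger.Object);

        controller.Index();

        // Assert only that the controller called the logger. How the logger
        // actually writes its entry is covered by the logger's own tests.
        logger.Verify(l => l.Log(It.IsAny<string>()), Times.Once());
    }
}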

Perhaps above all, the virtue of TDD is that it imposes a requirement of designing decoupled code, with each component having as few responsibilities as possible. That’s just good design. However, I will say that as an OSS contributor to a codebase with no dedicated test engineers, I am indebted to the QA virtues as well.

Always create a Build.bat file that can set up, build and deploy your entire app

This has been incredibly helpful. I never create anything, either at work or on my own, without it. The bat file is usually a very small batch script that invokes a much larger PowerShell script, ensuring that the build can be launched from PowerShell as well as an old-school DOS command console. As PowerShell gains momentum, the batch file may eventually become unnecessary. I plan to devote a future post to this topic alone.
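
As a rough sketch of the pattern (the psake module path and the default.ps1 task file below are assumptions for illustration, not the actual RequestReduce layout):

:: Build.bat - a tiny launcher; all of the real work lives in PowerShell.
@echo off
powershell -NoProfile -ExecutionPolicy Bypass -Command ^
  "Import-Module .\packages\psake\tools\psake.psm1; Invoke-psake .\default.ps1 %*"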

This script should ideally set up ALL dependencies, including IIS site configuration, database creation, installation of any services, certificates, etc. All of this can be done via PowerShell and other command line tools. If something has to be set up manually, I would argue that you either do not fully understand the scripting capabilities of PowerShell or of the technology you need to configure, or you are using a technology that you should not be using.

This script (or collection of scripts) should also be able to package all build artifacts. Typically this is the script invoked by your continuous integration system.

The script may take some time to write, but it pays off fast and drastically reduces the time needed to onboard new team members. You no longer have to sit with a new dev and hold their hand to get the app up and running, or hand them a 20-page document that is out of date and riddled with frustrating mistakes. The script itself is now your build and deployment document. If the build script fails, no new code gets written until it is fixed.

For an example of what such a script looks like in the wild, check out Build.bat in my RequestReduce GitHub repo and follow it through to its PowerShell script. I use psake to manage the build. I like its native support for PowerShell over MSBuild or NAnt, but those will work too.

Use DVCS for source control

The biggest downside to using a DVCS (Distributed Version Control System) like Mercurial at Microsoft is that if you ever find yourself on a team that uses TFS, and inevitably you will, you will find yourself craving to dine on broken glass in an effort to mask the pain of bad merges, locked files, and a disconnected user experience that is poor at best.

My team uses Mercurial because it is more Windows-friendly than Git. I do use Git for my personal project, RequestReduce. I’ll admit that when I was migrating from SVN to Mercurial 18 months ago, there were some learning-curve issues. I often find that it takes a while for DVCS newbies to “get it.” But once they do, they will swear by it. Let’s face it: this is the future of source control.

From an install and setup perspective, the comparison between Mercurial or Git and TFS is stark indeed. TFS is a monster (and not one of those cute and friendly monsters in Monsters, Inc.), while Mercurial or Git is refreshing to set up in comparison (like a young Golden Retriever puppy – so small and cute you just want to spoon it – perhaps I have said too much here). However, this is the least of the benefits of DVCS. To me the true beauty is having a complete local repository that I can commit to, branch, and merge entirely locally. The complete history of the repo, including all activity both local and remote, is stored locally. I can play and merge and commit as granularly as I want without disturbing or being disturbed by others. And merging is rock solid: I get far fewer merge conflicts than I ever had on SVN, and none of the other weirdness I saw on TFS.

My other big tip here is to learn the command line. This may seem daunting if you are used to a GUI, but once you learn it, I assure you that you will have a better understanding of what it is doing and how it works, which can be very important. Also, even for an appalling typist like myself, it is much faster.
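
For instance, this handful of commands covers most day-to-day Mercurial work, and everything except pull and push runs entirely against the local repository:

hg pull                 # fetch new changesets from the remote repository
hg update               # bring the working directory up to date
hg commit -m "message"  # commit locally; no network required
hg merge                # merge another head into the working directory
hg push                 # publish your local commits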

Microsoft: A far-from-perfect great place to work

To sum up my time so far at Microsoft: it’s been a really good place to work. Overall it has been a very positive experience and I have no regrets. It’s like working in a technical playground. Ever seen a young kid in a big playground? That’s me when I realize I get to build apps for a day. However, it’s far from perfect. Working for Microsoft had been a long-time dream, which I had long idealized. I had spent my career in startup mode and always wondered how outfits like Microsoft handled things like data center deployments and recruiting. I had always assumed that they handled those things in a far superior manner than I ever had. It turns out there are a lot of kinks, even at Microsoft. Probably because these are really hard things to get right. There are times when I laugh at how much I idealized and even romanticized Microsoft. It’s run by a bunch of humans, like most other companies, and humans can only be so right so often.

My next place of employment will be run by aliens. That should really take things to the next level.

Comparing RequestReduce with other popular minifiers by Matt Wrock

I have been asked several times now how RequestReduce compares to popular minification and bundling solutions like SquishIt, Cassette, and the upcoming ASP.NET 4.5 minification and bundling features. Before I say anything, let me note that RequestReduce is an OSS project; I make no money from it and in fact lose quite a bit of time to it. This comparison is not at all intended to make the statement that the other solutions out there suck and that you should therefore use my super cool Wrock star solution. The solutions currently out there are all great tools written by great developers. Also, I am a Microsoft employee and do not in any way wish to compete with my employer. I am nothing but supportive of the ASP.NET team’s progress in enhancing performance out of the box with ASP.NET 4.5. That all said, RequestReduce does take a unique approach to bundling and minification that I want to point out in this post.

Automatically Discovers CSS and JavaScript with no code or config

One of my primary objectives with RequestReduce is to have it optimize a website with absolutely no coding intervention. My philosophy is that a developer or team should not have to adjust their style or conventions to work with an auto-minifying tool. Currently, most of the popular tools require you to inject code or a control into your ASP.NET page or MVC view to act as the touch point that defines what should be minified.

Being able to avoid adding such code is obviously ideal for legacy apps, where you might not even have the ability to change code or any idea where to begin. I also like it for green-field projects. I just don’t think that a tool like RequestReduce should have a noticeable presence in your code.

RequestReduce uses a response filter to scan your response for all <link> tags in the head and all <script> tags in the page. As long as the href or src points to a URL that returns a CSS or JavaScript content type, it will be processed by RequestReduce. The exception to this rule is JavaScript served with no-store or no-cache in its response headers, or that expires in less than a week; RequestReduce ignores those. Also, by default, RequestReduce ignores JavaScript pulled from the Google or Microsoft CDNs. The idea there is that such content has a high likelihood of already being cached in the user’s browser. RequestReduce does expose configuration settings and an API to give more fine-tuned control over which CSS and JavaScript to filter.
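
Restated as code, those eligibility rules look roughly like the sketch below. This is my own illustrative restatement, not RequestReduce’s actual API:

using System;
using System.Net;

public static class ReductionRules
{
    // Reduce CSS/JavaScript responses unless they are uncacheable
    // or expire in under a week.
    public static bool IsReducible(HttpWebResponse response)
    {
        var contentType = response.ContentType ?? string.Empty;
        bool isCssOrJs = contentType.Contains("text/css") ||
                         contentType.Contains("javascript");

        var cacheControl = response.Headers[HttpResponseHeader.CacheControl] ?? string.Empty;
        bool uncacheable = cacheControl.Contains("no-store") ||
                           cacheControl.Contains("no-cache");

        DateTime expires;
        bool nearFutureExpiry =
            DateTime.TryParse(response.Headers[HttpResponseHeader.Expires], out expires) &&
            expires < DateTime.UtcNow.AddDays(7);

        return isCssOrJs && !uncacheable && !nearFutureExpiry;
    }
}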

Minifies and Combines External and Dynamic Content

Most of the popular minification and bundling solutions are file based. In other words, they pull the original unminified resources from the file system and assume everything is already on your server. While this obviously covers most cases, it does not cover external scripts or things like WebResource.axd and ScriptResource.axd, which are generated dynamically.

RequestReduce is HTTP based. It pulls down original content via HTTP, which means it can pull down any CSS or JavaScript as long as it is publicly available from a URL. This is great for a lot of blog and CMS systems that rely heavily on WebResources and ScriptResources. It is also great for external content. Now, as stated above, RequestReduce ignores “near future” expiring scripts. However, toward the top of my backlog is a feature to handle those. Imagine being able to include those pesky social media scripts.

Automatically Sprites CSS Background Images

Anyone who has created sprite sheets from scratch knows how tedious that process can be. As a site adds images in new releases, those sprite sheets have to be revised, which has an engineering cost and the risk of being forgotten. Ask your engineering team who wants to do the spriting and don’t expect a huge show of hands. RequestReduce parses the CSS, looks for images it thinks it can sprite, and then generates the sprite sheets on the fly.

There are limitations in what RequestReduce will find, and the potential to distort the page rendering in some cases when images are already sprited. Much of that can be easily mitigated. Please see this wiki and also this one for hints and explanations on how to improve and optimize the RequestReduce spriting experience. The very next feature I am working on should alleviate much of the mess that can sometimes occur on the fraction of sites that already have sprites, and it will also allow RequestReduce to sprite even more images. Beyond that, I plan to address spriting foreground images. How cool would that be?

Deferred Processing

RequestReduce never blocks a request while waiting to minify and bundle resources. If RequestReduce has not already done the minification and bundling, it sends the original response and queues the resources for processing. In the case of RequestReduce this is particularly important, since the spriting can be quite costly. Once resources have been processed, all subsequent requests for those resources serve the optimized content using optimized caching headers and ETags.
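
The shape of that pattern, reduced to a hypothetical sketch (none of these type names come from RequestReduce itself):

using System.Collections.Concurrent;

// Serve immediately; optimize in the background. A separate worker would
// drain 'queue' and publish finished bundles into 'optimized'.
public class DeferredOptimizer
{
    private readonly ConcurrentDictionary<string, string> optimized =
        new ConcurrentDictionary<string, string>();
    private readonly ConcurrentQueue<string> queue =
        new ConcurrentQueue<string>();

    public string Resolve(string resourceUrl)
    {
        string optimizedUrl;
        if (optimized.TryGetValue(resourceUrl, out optimizedUrl))
            return optimizedUrl; // already processed: serve the optimized bundle

        queue.Enqueue(resourceUrl); // remember it for the background pass
        return resourceUrl;         // serve the original without blocking
    }
}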

SqlServer Content Synchronization and easy integration with CDNs and Cookie-Less domains

RequestReduce allows you to easily configure an alternate hostname where requests for static resources should be sent. This works great for CDNs and cookie-less domains, and it supports web performance best practices.

Also, since RequestReduce can synchronize optimized content via SQL Server, it becomes an ideal solution for many web farm implementations. A common problem in a web farm scenario is that a request for the base page provides URLs for scripts and CSS that point to the optimized files; then a different server receives these requests, and if those resources have not been processed yet on that server, a 404 can ensue. This can also be handled with a common static file share. See this wiki for more info on this.

Now, a lot of the current solutions out there do provide integration points for you to extend their processing and plug these kinds of features into their frameworks. RequestReduce attempts to provide these features out of the box.

Why not just do all of this at Build Time?

This is another common and somewhat related question I get. On the one hand, I totally agree. In fact, in probably most scenarios out there on the net today, a build-time solution will suffice. Most sites don’t deal with dynamic or external content, which are the areas where a build-time solution simply won’t work. A build-time solution also imposes a lot less risk. There are no extra moving parts running in production, minifying and bundling your code, that can break. If such breakage interferes with your site’s ability to serve its CSS and JavaScript, the results can be akin to total downtime. Also, with a build-time solution, you know exactly what is going into production, and your test team can confidently sign off on what they tested.

I intend to eventually add features to RequestReduce to provide a better build-time experience. To me, the beauty of a runtime solution is not having to worry about declaratively adding new resources to the build tasks. As long as the tool is stable, I can have confidence that new resources (images, CSS, and JavaScript) will get picked up in the optimization. Also, the potential for optimizing external resources could be huge. There is a fair amount to be done here to fully leverage this potential, but it is a fact that much of the web’s performance degradation can be blamed on resources served from external sites.

I really hope this answers many of the questions about what makes RequestReduce different from other similar tools. Please do not hesitate to ask for clarification in the comments if it does not, or if you feel I have missed anything significant.

Setting Response.Filter after Response.Redirect can cause a “Filtering is not allowed” HttpException by Matt Wrock

I ran into this error on a RequestReduce bug, and there was not a whole lot of information pointing to a cause and remedy. So I’m posting this in the hope that it will help the Google Lords find a solution more quickly for others.

So if you are using Razor templates in ASP.NET and you issue a Response.Redirect at some point, either in the template itself or from a method that the template calls (Html.RenderAction, for example), and then you later set a response filter by assigning Response.Filter, you will receive an HttpException stating “Filtering is not allowed.” It may look a little like this friendly message:

System.Web.HttpException (0x80004005): Filtering is not allowed.
   at System.Web.HttpResponse.set_Filter(Stream value)
   at RequestReduce.Module.RequestReduceModule.InstallFilter(HttpContextBase context) in c:\RequestReduce\RequestReduce\Module\RequestReduceModule.cs:line 223
   at System.Web.HttpApplication.SendResponseExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

For example, consider this Razor page:

@{
    Layout = null;
}

<!DOCTYPE html>

<html>
<head runat="server">
    <title>index</title>
</head>
<body>
    <div>
        @{ Html.RenderAction("Redirect"); }
    </div>
</body>
</html>

Now assume we have a very simple action method like this:

public ActionResult Redirect()
{
    Response.Redirect("/Home/target");
    return View();
}

OK, that’s kind of silly code, but it is just to illustrate an example. And besides, who doesn’t like to be silly sometimes? Now, it so happens that you have an HttpModule that sets a response filter. Here is what you should do:

if (!Response.IsRequestBeingRedirected)
    Response.Filter = myFilter;

If you neglect to use the if statement checking for a redirect, you will be sorry. Unless, of course, you enjoy a white background with black text inside yellow boxes.
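
Put in fuller context, the guard lives in the module’s filter-installation step. This is a hedged sketch; MyFilterStream stands in for whatever custom Stream wrapper your module uses:

using System.Web;

public class FilterModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // ReleaseRequestState is one common point to install response filters.
        application.ReleaseRequestState += (sender, e) =>
        {
            var response = ((HttpApplication)sender).Response;
            // Without this check, assigning the filter on a redirected
            // response throws "Filtering is not allowed."
            if (!response.IsRequestBeingRedirected)
                response.Filter = new MyFilterStream(response.Filter);
        };
    }

    public void Dispose() { }
}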

While my encounter with this has been with Razor templates, and it does not reproduce using Web Forms views, I would still check for redirects before setting a response filter in any context. There is no reason I can think of to ever filter a response that is being redirected.

Microsoft blogging platform gains 33% performance boost after adopting RequestReduce by Matt Wrock

Here is a Keynote performance graph covering a few hours before and after launch.

pic.twitter.com/UoqwYVWz

Looks good to me.

A win for Microsoft. Another win for the Open Source Community

Today Microsoft completed its onboarding of RequestReduce on its MSDN and TechNet blogging platform. Huge thanks and a shout out to Matt Hawley (@matthawley), who played a pivotal role in this launch! In fact, he DID launch it. Matt also made a significant contribution to RequestReduce by pulling the .NET 4.0-dependent SqlServer Store out into a separate package and tweaking the build script to make it compatible with .NET 3.5.

I am very pleased to report that upon launch, readers can now access a site that is 33% faster than it was before. This is a win for Microsoft, for the multitudes of readers that visit its content every day, and for the open source software (OSS) community at large. I like to think this demonstrates Microsoft’s growing commitment to OSS. In just the past couple of years, Microsoft has made giant strides toward fostering the open source ecosystem. This is just another small step forward.

Just to be clear: I am a Microsoft employee. However, I develop RequestReduce on my own time, away from work, and there have been non-Microsoft contributions. The features that I build into RequestReduce do not originate from some grand Microsoft project Gantt chart. My RequestReduce releases do not require multiple levels of sign-off from the Microsoft covenant of elders.

Furthermore, my team and the teams I work closely with use tons of various OSS projects with the full blessing of the Microsoft legal department (I hear they wear special underwear, but am not sure – more on this later). We use NHibernate, StructureMap, Moq, xUnit, Castle Windsor, ServiceStack, Json.NET, psake, and a lot more. These are baked into Microsoft properties that many .NET devs visit every day, like the Visual Studio Gallery, various MSDN properties, and even the web service you call when you open Visual Studio’s Extension Manager.

RequestReduce: Optimized for improving performance of brownfield apps

While RequestReduce is suited to optimize any website, it is particularly ideal for sites that have already been built and are suffering from poor, and sometimes extremely poor, performance. The MSDN and TechNet blogging platform is a perfect example of this. It is built on top of third-party, non-Microsoft blogging software that uses ASP.NET 3.5. Let’s just say there was no lack of WebResource.axd and ScriptResource.axd requests, especially if you are fond of fitting, say, 30 or 40 in a single page. I mean, why not? Certainly God created the .axd extension for a reason and intended it to multiply and be fruitful.

RequestReduce is ideally architected for these scenarios. It follows a simple drop-in-and-just-work model. Unlike a lot of the other minify-and-bundle solutions, it filters your page and dynamically locates your JavaScript and CSS, regardless of whether it lives on your server or on Twitter’s CDN. It can process any resource with a text/css or JavaScript MIME type, even one that is dynamically generated, as in the case of ScriptResources. It works well with high-traffic, multi-server enterprise topologies because it is fast and provides multiple caching solutions, including one that caches in a central SqlServer instance.

So what exactly does RequestReduce do?

Well, after helping you lose weight, quit smoking, and significantly enhancing your sex life (I did say “significantly,” right?…good), it makes your website faster by minifying your CSS and JavaScript, combining your CSS and what JavaScript it can without breaking your scripts, and attempting to locate background images it can combine into sprite files. It also optimizes the color palette and compression of these sprites. It’s like running YSlow and then clicking the “optimize now” button. Really? You haven’t seen that button?

The end result is fewer bytes and fewer HTTP requests for your browser to handle when rendering a page, with very little work on your part. Do you really want to generate your own sprites each time you are handed a new image, or discover that you have forgotten to add that new script to your bundling config? Do you get frustrated when your site goes to production and you realize you spelled Cache-Control “Cace-Cntrl,” and your test team did not catch this because they were busy ensuring your app solved world hunger even if a user entered a space followed by an asterisk in your search box? Well, you can get this functionality on your own site now at http://www.requestreduce.com or via NuGet.