24 Feb 2012, 13:29

Backups and data redundancy for the paranoid

Data backup is one of those things that everybody talks about, few people do, fewer people do well, and fewer still have actually tested.

What makes a good backup strategy?

To me, a backup strategy needs to have a few qualities.

  • It has to be easy. If it’s not, you won’t keep up with it.
  • It has to be reliable. Backing up your data won’t do you any good if your backups aren’t good.
  • It has to be redundant. Backups can go bad too.
  • It has to be recoverable from. If you’re encrypting your backups and you forget your key, they’re useless.

Now, what brought me here, and how did I attain those goals?

First, why so paranoid?

I’ve been paranoid about data loss for a long time, and I spent a good deal of time and effort trying to figure out what the best strategy would be for me that met all of the requirements that I outlined above. But why was I so paranoid to begin with?

When I was a freshman in college, I experienced my first hard drive failure. My Western Digital hard drive suddenly gave up the ghost, taking with it all of the software I had written over the years (much of it in x86 assembly). Try as I might, I couldn’t recover anything. I would have paid anything then to get that data back, but as a poor college student, professional data recovery wasn’t an option. With no real backups, my entire digital life to that point was wiped clean.

It was at this point that I learned the importance of backups. I didn’t, however, learn the importance of a good backup strategy. To that end, I would burn CDs and email myself copies of things that were REALLY important. Other important things were zipped up and stored on another hard drive. Sometimes I’d just copy and paste a folder somewhere else. I’d have multiple copies of things floating around, and no real way to tell which was the most recent, or most correct.

At the time, I thought that this worked. Mostly because I just didn’t know any better. Had I experienced a drive failure during that period, I’d have been sent on a wild goose chase through my old email, unlabeled physical media, and folders upon folders of copies of various files and zip archives. I’ve since seen the error of my ways. In part because I’ve gotten smarter, and in part because technology has gotten smarter.

A new strategy is born

My new backup strategy is much, much more robust, easy to manage, and easy to recover from.

Technology

Hardware: my primary machine (with a Secondary Drive), an External Drive, a Gen2 Drobo, and an Epson Artisan 835 (for scanning and printing).
Software: CrashPlan (with a CrashPlan+ subscription), DroboCopy, TrueCrypt, git, and a Virtual Machine that runs my server.

How Does This Work Together?

  • Every 4 hours, CrashPlan backs up changes to the Virtual Machine (which lives on the Secondary Drive) to the External Drive (encrypted with TrueCrypt), the Drobo, and CrashPlan+.

  • Once per week, DroboCopy copies the Virtual Machine to the Drobo. This gives me an instantly available copy-and-paste snapshot of the server so I can get back up and running while I recover the most recent version through a CrashPlan restore.

  • In real-time, CrashPlan watches for changes to anything of high value on the Drobo and backs those changes up to the External Drive and CrashPlan+.

  • Those high-value files (HVFs), in addition to source code, pictures, tax returns, and the like, include scans of important physical documents (product warranties, contracts, receipts, etc.) from the Artisan 835. The original physical documents are kept in a separate fire/water-proof safe. In addition, using the Artisan, I create hard copies of digital documents (receipts and the like) for physical storage.

  • Source code is also stored in git repositories on the Virtual Machine so that I have full revision history for any project that I’m working on (my old CVS repositories have been deprecated and converted to git repositories).

What does all of this do for me? I have several points of recovery available, and aside from the OS and applications, which have no irreplaceable value, I have no more than 4 hours of unrecoverable data. This all took quite a while to set up, but the peace of mind is worth it. The backups are pretty much out of sight, out of mind, and I never have to worry about a manual step to protect my data.

Is it excessive? Perhaps, but I’ll never have to worry about losing another piece of important data again.

There are a couple of things that I’d like to improve, but they’re not critical. First, I’d like to upgrade to a DroboFS to remove the dependency of having my Drobo physically attached to my primary machine. Second, I really wish CrashPlan would allow me to add machines to my account without buying a family subscription. I only have one more machine I’d like to add (the Virtual Machine), and the cost of a family subscription just isn’t worth it when I can work around that limitation by backing up the entire machine (since it’s just a set of files). It’s just annoying.

Update

I’ve since upgraded from the Gen2 Drobo mentioned above to a DroboFS (2x 3 TB, 2x 2 TB, and 1x 1 TB drives with dual-drive redundancy). In addition to the speed benefits and the obvious benefits of being a NAS, my paranoia during array rebuilds makes dual-drive redundancy a must-have. Unfortunately, the DroboFS is currently having a lot of different issues (though none that seem to be putting my data at risk). I have a support ticket in with Data Robotics, and hopefully they can address the issues.

16 Nov 2011, 07:07

event.layerX and event.layerY are broken and deprecated in WebKit.

If you’ve been testing any web development against Chrome developer builds, you’ve quite possibly seen this warning show up in your console:

event.layerX and event.layerY are broken and deprecated in WebKit. They will be removed from the engine in the near future.

I found a handy little Javascript snippet that alleviates this problem (though I can’t seem to find the source of it now; if you do, please let me know). It looks like this:

// Prevent "event.layerX and event.layerY are broken and deprecated in WebKit. They will be removed from the engine in the near future."
// in latest Chrome builds.
(function () {
    // remove layerX and layerY
    var all = $.event.props,
        len = all.length,
        res = [];
    while (len--) {
        var el = all[len];
        if (el !== 'layerX' && el !== 'layerY') res.push(el);
    }
    $.event.props = res;
} ());

This self-executing function goes through jQuery’s $.event.props and removes the references to event.layerX and event.layerY so that jQuery won’t try to copy them to new event objects in the future.

The root of the problem is that whenever jQuery binds events, it copies those properties. If you execute this function before you do any event binding with jQuery, those properties don’t exist to be copied. Bye bye warnings!
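
For example (the element ID here is illustrative), make sure the snippet runs before any handlers are bound:

// 1. Run the $.event.props cleanup above, before any jQuery event binding.
// 2. Bind events as usual; jQuery no longer copies the deprecated properties.
$(document).ready(function () {
    $('#saveButton').click(function (e) {
        // layerX/layerY are no longer copied onto jQuery's event object;
        // use pageX/pageY instead
        console.log(e.pageX, e.pageY);
    });
});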

Updates:

This addresses the problem for jQuery >= 1.7: jQuery Ticket #10531: Consider removing layerX and layerY from $.event.props

The source of the snippet appears to be http://jsperf.com/removing-event-props/2 via http://stackoverflow.com/questions/7825448/webkit-issues-with-event-layerx-and-event-layery.

03 Nov 2011, 14:40

Browser specific Javascript loading with jquery.loadScriptForBrowser.js

It’s not very often that I write Javascript that needs to be downloaded and run by only one browser, but when I have to, I want it to be easy. I don’t want to waste a bunch of time doing user agent parsing and checking, I want to just write what I need for that specific browser, and go on my way.

A quick look through the Googles didn’t really yield anything that I could use, so I threw together a jQuery plugin: jquery.loadScriptForBrowser.js. The browser detection functionality and logic come from the excellent jQuery Browser Plugin (which I’ve used many, many times before to target CSS at specific browsers without resorting to hacks that are difficult to read, maintain, and understand). Like the code from the jQuery Browser Plugin, this plugin is MIT licensed.
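
For the curious, here’s a minimal sketch of the plugin’s core idea. This is not the actual source; it assumes the jQuery Browser Plugin’s $.browser flags (e.g. $.browser.chrome, $.browser.msie) are present:

$.loadScriptForBrowser = function (map) {
    // Walk the browser -> scripts map, acting only on entries for the current browser
    $.each(map, function (browser, scripts) {
        if (!$.browser[browser]) return; // not this browser; skip

        $.each(scripts, function (index, script) {
            if ($.isFunction(script)) {
                script(); // inline functions are executed directly
            } else {
                $.getScript(script); // string paths are downloaded and executed
            }
        });
    });
};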

Sample usage for this plugin:

<script type="text/javascript" src="./js/jquery.loadScriptForBrowser.min.js"></script>
<script type="text/javascript">
    $.loadScriptForBrowser({
        'chrome': [
            './js/chrome.specific.js'
        ],
        'msie': [
            function(){console.log('Browser: MSIE')}
        ]
    });
</script>

Not much to it. You can check it out at http://git.sdb.cc/projects/jquery-loadscriptforbrowser-js, or just go ahead and clone the repository: git clone git://git.sdb.cc/jquery-loadscriptforbrowser-js.

25 Oct 2011, 14:54

Jasmine, Chrome, and Access-Control-Allow-Origin

I recently updated a project I’ve been working on from jQuery 1.4 to jQuery 1.6.4 and ran the suite of Javascript unit tests (written in Jasmine) associated with it. It took me quite a while and lots of digging and debugging before I even noticed that Chrome’s console had logged a few errors. One of which was:

Origin null is not allowed by Access-Control-Allow-Origin when trying to call loadFixtures();

Well, of course this would be responsible for all of my jQuery selectors coming back empty for elements that I knew existed; Jasmine couldn’t load the fixture, so there weren’t any elements to actually select.
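
For context, a typical spec that depends on a fixture looks something like this (the names are illustrative; loadFixtures comes from jasmine-jquery):

describe('order form', function () {
    beforeEach(function () {
        // Under file://, Chrome blocks this read, so the fixture never makes it into the DOM
        loadFixtures('orderForm.html');
    });

    it('finds the submit button', function () {
        // With no fixture loaded, selectors like this come back empty
        expect($('#submit').length).toEqual(1);
    });
});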

As it turns out, this issue is isolated to Chrome’s behavior and how it deals with accessing local files. You’ll see similar behavior when trying to fire off AJAX requests in some situations. You can work around this problem for local debugging by launching Chrome as chrome.exe --allow-file-access-from-files, which disables that access control. After launching Chrome with that parameter, the test suite passed (as it should) and all was happy!

10 Sep 2011, 13:29

Re-Revisited: Strongly typed routes for ASP.NET MVC

I finally stopped being lazy, and got git configured and rolling. You can now find the RouteCollectionExtensions class available at http://git.sdb.cc/projects/RouteCollectionExtensions, and if you so desire, you can clone it out of the read-only repo using:

git clone git://git.sdb.cc/RouteCollectionExtensions

Issue and support tracking is available there as well.

21 Jul 2011, 12:08

Revisited: Strongly typed routes for ASP.NET MVC

As I mentioned here, I was a bit unimpressed with the way routes were defined in ASP.NET MVC3. They weren’t type-safe and, as such, were prone to many errors. I implemented a solution that addressed that well enough for my particular desires, but it was still (as I mentioned at the end of that article) a bit too wordy for my liking.

Well, that kept getting on my nerves, so I reworked a lot of what I had previously done.

RouteCollectionExtensions.cs

using System;
using System.Linq.Expressions;
using System.Web.Mvc;
using System.Web.Routing;

namespace Sdbarker.Extensions.RouteCollectionExtensions {
    public static class RouteCollectionExtensions {
        /// <summary>
        /// Maps the standard {controller}/{action}/{id} default route.
        /// </summary>
        public static Route MapDefaultRoute(this RouteCollection routes) {
            return routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }

        /// <summary>
        /// Maps a URL route and sets the default values.
        /// </summary>
        /// <typeparam name="TController">The controller type that this route targets.</typeparam>
        /// <param name="action">The method that this route will call on the specified controller.</param>
        /// <param name="name">Optional: A string that specifies the name of the route (automatically generated as ControllerMethod if null or empty).</param>
        /// <param name="url">Optional: A string that specifies the URL for the route (automatically generated as Controller/Method if null or empty).</param>
        /// <param name="defaults">Optional: An object that contains default route values.</param>
        /// <param name="constraints">Optional: An object that specifies the constraints of the route.</param>
        /// <returns>The generated Route object.</returns>
        public static Route MapRoute<TController>(this RouteCollection routes, Expression<Func<TController, ActionResult>> action, string name = null, string url = null, object defaults = null, object constraints = null) where TController : IController {
            MethodCallExpression m = (MethodCallExpression)action.Body;
            if (m.Method.ReturnType != typeof(ActionResult)) {
                throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");
            }
            if (string.IsNullOrEmpty(name)) {
                name = typeof(TController).Name + m.Method.Name;
            }

            if (string.IsNullOrEmpty(url)) {
                url = string.Format("{0}/{1}", typeof(TController).Name.RemoveLastInstanceOf("Controller"), m.Method.Name);
            }

            if (defaults == null) {
                defaults = new { action = m.Method.Name };
            }
            else {
                defaults = new RouteValueDictionary(defaults);
                (defaults as RouteValueDictionary).Add("action", m.Method.Name);
            }

            return routes.MapRoute<TController>(name, url, defaults, constraints);
        }

        private static Route MapRoute<TController>(this RouteCollection routes, string name, string url, object defaults, object constraints) where TController : IController {
            Route route = new Route(url, new MvcRouteHandler()) {
                Defaults = (defaults is RouteValueDictionary) ? (defaults as RouteValueDictionary) : new RouteValueDictionary(defaults),
                Constraints = new RouteValueDictionary(constraints)
            };

            // If a controller was specified in the defaults, throw an ArgumentException;
            // the caller is overriding the strongly typed controller without realizing it
            string controller = typeof(TController).Name.RemoveLastInstanceOf("Controller");
            if (route.Defaults.ContainsKey("controller")) {
                throw new ArgumentException("Defaults contains key 'controller', but using a strongly typed route.");
                // route.Defaults["controller"] = controller;
            }
            else {
                route.Defaults.Add("controller", controller);
            }

            // Move the original action (which is really a ControllerAction) to the controllerAction key
            // and then specify our own action value, otherwise routing will flip out
            object action = null;
            if (route.Defaults.TryGetValue("action", out action)) {
                if (action.GetType().IsGenericType && action.GetType().GetGenericTypeDefinition() == typeof(ControllerAction<>)) {
                    route.Defaults.Add("controllerAction", route.Defaults["action"]);
                    route.Defaults["action"] = route.Defaults["controllerAction"].ToString();
                }
            }

            routes.Add(name, route);
            return route;
        }

        private static string RemoveLastInstanceOf(this string text, string remove) {
            return text.Remove(text.LastIndexOf(remove));
        }
    }

    public class ControllerAction<TController> where TController : IController {
        public string Action { get; private set; }

        public override string ToString() {
            return Action;
        }

        public static implicit operator string(ControllerAction<TController> controllerAction) {
            return controllerAction.Action;
        }

        public ControllerAction(Expression<Func<TController, ActionResult>> action) {
            MethodCallExpression m = (MethodCallExpression)action.Body;
            if (m.Method.ReturnType != typeof(ActionResult)) {
                throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");
            }
            Action = m.Method.Name;
        }
    }
}

This long bit of neatness now lets you declare routes with a much cleaner syntax. Like so:

routes.MapRoute<HomeController>(controller => controller.Index());
routes.MapRoute<HomeController>(controller => controller.TestPost(new Models.HomeModel()));
routes.MapRoute<HomeController>(controller => controller.WithParams(0, ""), url: "Home/WithParams/{id}/{val}", defaults: new { id = 7, val = "foo" });
routes.MapDefaultRoute();

That’s much, much prettier (to me anyway).
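
For reference, here’s what the conventions in the code above generate for those calls (worked out from the name/url generation logic):

// MapRoute<HomeController>(c => c.Index())    -> name: "HomeControllerIndex",    url: "Home/Index"
// MapRoute<HomeController>(c => c.TestPost()) -> name: "HomeControllerTestPost", url: "Home/TestPost"
// The WithParams route keeps its explicit url ("Home/WithParams/{id}/{val}") and defaults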

16 Jul 2011, 23:05

Enable Custom Fluent Validation Validators on the Client Side

Fluent Validation is a great, powerful, fluent (obviously) validation library for .NET. It does a very good job of simplifying and applying validation rules to your models. It also does a great job melding with MVC3. Combine that with jQuery Validate and jquery.validate.unobtrusive, and you have some really simple, really powerful validation that mirrors itself on the client side, effectively preventing a user from making a round-trip server request for a form that’s going to fail validation anyway. All great stuff!

That simplification, though, is quite possibly its downfall. If you’re doing any kind of complex validation, while Fluent can probably accommodate you, getting those rules to the client side becomes a bit more tricky. Both Fluent and the jQuery Validate libraries are extensible, but Fluent won’t send most of its complex rules to the client side. So what do you do when you have more complex validation and you still want the slick client-side validation? It turns out, you do quite a lot. But the result is fantastic. Let’s take a look at creating a custom PropertyValidator for Fluent, registering it with Fluent using FluentValidationModelValidatorProvider, creating an extension method to give us the slick chaining with the rest of the Fluent validators, and finally wiring it all up on the client side for jQuery’s Validate libraries.

Let’s get to it!

This is a VERY trivial example, just to illustrate a pattern for implementing your own more complex validation logic. Let’s start by taking a look at creating a custom PropertyValidator. In this case, we’re going to create one that requires the value passed to it to be equal to the string “foo”.

// Note: PropertySelector isn't a FluentValidation type; assume it's a simple delegate
// along the lines of: public delegate object PropertySelector(object instance);
internal interface IEqualsFooValidator { }

public class EqualsFooValidator : PropertyValidator, IEqualsFooValidator {
	private readonly PropertySelector _propertyFunc;

	public EqualsFooValidator(PropertySelector propertySelector)
		: base(() => "EqualsFooValidator Error") {
		_propertyFunc = propertySelector;
	}

	protected override bool IsValid(PropertyValidatorContext context) {
		string value = (string)_propertyFunc(context.Instance);

		if (string.Equals(value, "foo")) {
			return true;
		}
		return false;
	}
}

Pretty straightforward stuff there. We have our EqualsFooValidator as a Fluent PropertyValidator that takes a PropertySelector as its parameter. We use that parameter to get the value of the property that has to equal “foo”, do the comparison, and return our validation state. Ignore the fact that you can pull the property value from context.PropertyValue and skip passing in a PropertySelector; I’m showing more complex behavior here. :)
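
For contrast, the trivial version of IsValid that leans on context.PropertyValue (and skips the PropertySelector plumbing entirely) would look something like this sketch:

protected override bool IsValid(PropertyValidatorContext context) {
	// The context already carries the value of the property being validated
	return string.Equals((string)context.PropertyValue, "foo");
}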

Now after we’ve got that set up, we have to create a FluentValidationPropertyValidator to use as our adaptor for MVC to get its little mitts on and generate our client side rules. A little magic like so:

public class EqualsFooValidatorAdaptor : FluentValidationPropertyValidator {
	private const string ValidationType = "equalsfoo";

	private IEqualsFooValidator FooValidator {
		get { return (IEqualsFooValidator)Validator; }
	}

	public EqualsFooValidatorAdaptor(ModelMetadata metadata, ControllerContext controllerContext, PropertyRule rule, IPropertyValidator validator)
		: base(metadata, controllerContext, rule, validator) {
		ShouldValidate = false;
	}

	public override IEnumerable<ModelClientValidationRule> GetClientValidationRules() {
		if (!ShouldGenerateClientSideRules()) yield break;

		var formatter = new MessageFormatter().AppendPropertyName(Rule.PropertyName);
		string message = formatter.BuildMessage(Validator.ErrorMessageSource.GetString());

		yield return new ModelClientValidationRule {
			ValidationType = ValidationType,
			ErrorMessage = message
		};
	}
}

The magic here is in GetClientValidationRules, which returns our ValidationType. ValidationType is what we want our client-side rule to be called, and it’s what we’ll wire up for jQuery Validate and unobtrusive.

Now that we have our validator and adaptor, let’s tie them to our model by creating some extensions. In your typical extension method fashion, you’ll have this little number:

public static IRuleBuilderOptions<TModel, TProperty> EqualsFoo<TModel, TProperty>(
	this IRuleBuilder<TModel, TProperty> ruleBuilder,
	PropertySelector propertySelector) {
		return ruleBuilder.SetValidator(new EqualsFooValidator(propertySelector));
}

Again, nothing terribly special here: we’re just setting the validator from the ruleBuilder and returning it so we can chain like we normally do with Fluent.
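
With that in place, the new rule chains like any built-in validator. A quick sketch (the model and property names here are illustrative):

public class HomeModelValidator : AbstractValidator<HomeModel> {
	public HomeModelValidator() {
		RuleFor(m => m.Bar)
			.NotEmpty()
			.EqualsFoo(instance => ((HomeModel)instance).Bar); // our custom rule, chained like any other
	}
}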

That’s all we’ve got for the server side stuff. Now we have the client-side things to wire up. That voodoo (which isn’t really voodoo) looks like so:

// Note: the fallback message here assumes jQuery.validator.messages.equalsfoo has been
// defined somewhere; with unobtrusive validation, the server-generated
// data-val-equalsfoo message is what actually gets displayed.
jQuery.validator.addMethod('equalsfoo', function (value, element) {
	var fooVal = $(element).val();

	if (fooVal === "foo") {
		return true;
	}
	return false;
}, jQuery.validator.messages.equalsfoo);

jQuery.validator.unobtrusive.adapters.addBool("equalsfoo");

This tells jQuery Validate and unobtrusive to look for our validator, and how to handle it.

Now that we’ve got all of the functional parts there, we have to make sure that Fluent knows to register our new custom rule. We do that in our Global.asax like this:

var provider = new FluentValidationModelValidatorProvider(new AttributedValidatorFactory());
provider.Add(typeof(EqualsFooValidator), (metadata, context, rule, validator) => new EqualsFooValidatorAdaptor(metadata, context, rule, validator));
ModelValidatorProviders.Providers.Add(provider);
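
For context, that registration lives alongside the usual MVC3 bootstrapping. A sketch of the full Application_Start might look like this:

protected void Application_Start() {
	AreaRegistration.RegisterAllAreas();
	RegisterGlobalFilters(GlobalFilters.Filters);
	RegisterRoutes(RouteTable.Routes);

	// Swap in the Fluent provider, with our custom adaptor registered, so MVC
	// emits the data-val-equalsfoo attributes for client-side validation
	var provider = new FluentValidationModelValidatorProvider(new AttributedValidatorFactory());
	provider.Add(typeof(EqualsFooValidator), (metadata, context, rule, validator) => new EqualsFooValidatorAdaptor(metadata, context, rule, validator));
	ModelValidatorProviders.Providers.Add(provider);
}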

24 Jun 2011, 15:26

Strongly typed routes for ASP.NET MVC

EDIT: I’ve revisited this here: Revisited: Strongly typed routes for ASP.NET MVC

The other day, a coworker had a problem where a route that I defined for a specific page was returning a 404.  I couldn’t reproduce the problem locally.  As it turns out his project was missing the entire controller class that this route depended on.  As I’m sure you’re well aware, routes are normally defined similarly to this:

routes.MapRoute(
    "BuildVersion",
    "build_version.html",
    new {
        controller    = "BuildVersion",
        action        = "DisplayBuildVersion"
    }
);

This presents a few points of failure, all stemming from the same issue; controllers and actions are specified as strings.  If the classes or methods that are specified here change, you won’t find out until run time, and you’ll only find out in the way of getting a 404 for your requested URL.  To help address this, I created a RouteCollectionExtensions class that provides a way to map routes with strong typing:

RouteCollection.MapRoute<IController>(
    "Route Name",
    "Route URL",
    new {
        action = new ControllerAction<IController>(c => c.ActionResult()).Action
    }
);

As an example:

routes.MapRoute<BuildVersionController>(
    "BuildVersionHtml",
    "build_version.html",
    new {
        action = new ControllerAction<BuildVersionController>(c => c.DisplayBuildVersion()).Action
    }
);

Using this method, if the class for the controller or the method for the action changes, you’ll get a compile time error.  The ControllerAction portion is a little more wordy than I’d like, but it works and is readable.

Sound interesting? Check out the implementation.

The implementation looks like this:

using System;
using System.Linq.Expressions;
using System.Web.Mvc;
using System.Web.Routing;

    public static class RouteCollectionExtensions
    {
        public static Route MapRoute<TController>(this RouteCollection routes, string name, string url, object defaults)
            where TController : IController
        {
            return routes.MapRoute(name, url, defaults, null);
        }

        public static Route MapRoute<TController>(this RouteCollection routes, string name, string url, object defaults, object constraints)
            where TController : IController
        {
            Route route = new Route(url, new MvcRouteHandler())
                              {
                                  Defaults = new RouteValueDictionary(defaults),
                                  Constraints = new RouteValueDictionary(constraints)
                              };

            route.Defaults.Add("controller", typeof(TController).Name.RemoveLastInstanceOf("Controller"));
            routes.Add(name, route);
            return route;
        }

        private static string RemoveLastInstanceOf(this string text, string remove)
        {
            return text.Remove(text.LastIndexOf(remove));
        }
    }

    public class ControllerAction<TController> where TController : IController
    {
        public string Action { get; private set; }

        public override string ToString()
        {
            return Action;
        }

        public static implicit operator string(ControllerAction<TController> controllerAction)
        {
            return controllerAction.Action;
        }

        public ControllerAction(Expression<Func<TController, ActionResult>> action)
        {
            MethodCallExpression m = (MethodCallExpression)action.Body;
            if (m.Method.ReturnType != typeof(ActionResult)) {
                throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");
            }
            Action = m.Method.Name;
        }
    }

I’m sure this could be improved to be much more thorough, but as it stands it accomplishes my immediate goals.

18 Apr 2011, 13:27

MIX11: Day 3 Summary

Day 3 of MIX was a pretty slow day compared to the other two, which is to be expected with the conference winding down.  There was more of the same focus as seen previously: mostly HTML5/IE9 and WP7.  We learned some more about the updates coming in WP7’s Mango release, and got a little more insight into the future of HTML5 and IE9.  There was some good information about WP7 application performance (which I covered more in depth here: http://blog.sdbarker.com/2011/04/14/windows-phone-application-performance/), and some talk about how to write maintainable Javascript/jQuery code (which was really all stuff that developers should be doing anyway, for every bit of code they write).

Overall, more good information, and a good ending to a great conference!

18 Apr 2011, 13:22

Mango's Enhanced Push Notifications and Live Tiles

There are quite a few changes coming in Mango regarding push notifications and live tiles.  The developer community was very outspoken about their needs for these features of the WP7 platform, and Microsoft took note and made some changes.

Changes coming in Mango include:

  • A local tile API to circumvent the need to post a tile notification request to a service elsewhere or one running on the device (see the sketch at the end of this post).

  • Back-of-tile support, to update the back of your tile the same way you update the front.  Tiles with updated backs will flip at random intervals.

  • Support for multiple tiles for your application, and deep linking tiles into your application (this works with toast notifications from the notification service too!)

  • The tile limit has been upped from 15 to 30.

These are all frequently requested changes from developers, and it’ll be great to have them available.  One thing that’s been noted about application popularity in the marketplace is that applications with fancy tiles are always more popular than those without.
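
As a taste of the first two items, here’s a rough sketch of what the local tile API looks like (types from Microsoft.Phone.Shell in the Mango beta SDK; treat the specifics as subject to change until Mango ships):

// Assumes: using System.Linq; using Microsoft.Phone.Shell;

// Build the tile data, front and back
var tileData = new StandardTileData {
    Title = "Front title",
    Count = 3,
    BackTitle = "Back title",        // back-of-tile support
    BackContent = "Updated content"  // flips to the front at random intervals
};

// Update the application's primary tile locally; no push notification service required
ShellTile.ActiveTiles.First().Update(tileData);

// Create a secondary tile that deep-links into a specific page of the app
ShellTile.Create(new Uri("/DetailsPage.xaml?id=42", UriKind.Relative), tileData);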

Happy tiling (and toasting)!