My new hybrid static-site

I'm pushing out a pretty big update to my blog today. There's a new design, and I've replaced my custom Node.js server with what I'm calling a "hybrid" static site.

Backstory

It all started more than a year ago when a friend introduced me to MarsEdit.

MarsEdit is a fantastic Mac app for editing blog content. It's got an all-native UI, and works with WordPress, Tumblr, Blogger and several other blogging engines. I knew I wanted to find a way to use it with my own blog.

Around the same time, I'd been thinking about switching my blog over to a statically-generated site. I'd had several static sites in the past, and always enjoyed the simplicity of them. The downside with static sites is updating them. Having to open up my laptop and make git commits every time I want to post something is just too much of a hassle.

I decided to break apart my blog into two pieces, one to hold the actual blog data, and the other to do all the HTML/CSS/JS generation.

The details

Overview of my hybrid content system

The data storage app uses Ruby, Rack, ActiveRecord, and Postgres. In order to make it work with MarsEdit, I replicated parts of the WordPress XML-RPC API, which is what MarsEdit uses to edit WordPress sites. So all the posts, pages, and categories in the app are standard ActiveRecord models, but to MarsEdit everything looks like WordPress content! I also gave the Ruby app a simple, read-only JSON API, so that the content would be accessible to other non-WordPress programs.

For the content-rendering part, I went with Metalsmith, a Node.js static site generator. For the most part, it's just like a normal static site. It's got layouts, stylesheets, JS, and all. But right before the generator runs, it crawls the Ruby app's JSON API and saves all the posts locally as plain .md files. So to the generator, it looks like it's just building a bunch of markdown files like the rest of the project!

Now I have the best of both worlds: all I have to do to write a new post is open up MarsEdit, publish the post, and rebuild my static site.

More to come later

There are quite a few more details in there that I hope to cover in later posts. In the meantime, all of the code is open source on GitHub if you want to poke around.

You can also send me your questions on Twitter at @stevenschobert.

Finding the best wireless channel in OS X

If you've found yourself wanting to know what the best wireless channels in your area are, and you have a Mac available, you're in luck! OS X ships with a GUI application that can help you out.

Wireless Diagnostics can tell you which channels are currently being used, and give you a recommendation as to the best one available.

You can launch Wireless Diagnostics through Spotlight, or by Option-clicking the wireless icon in your menu bar and clicking Open Wireless Diagnostics....

Open wireless diagnostics

Once Wireless Diagnostics opens, you'll see an introduction screen offering to scan your network for problems, but you can just ignore this window (but leave it open, or the app will stop running).

Next, go to the menu bar and choose Window > Scan.

Select 'Scan' from Window menu

This will bring up the Scan window, where you can see a full list of the networks in your area, and additional details about each network. You can click Scan Now anytime to refresh your results.

The Scan window will also have a panel on the left that displays which wireless channels it thinks are the best available options, for both 2.4GHz and 5GHz.

Best wireless channels after scan

Considerations

As with most things, "best" here is just an approximation. Wireless channels are complicated, and just because a channel is open doesn't mean it's automatically the best choice. Since wireless channels are actually ranges of frequencies, neighboring channels overlap and can interfere with one another even though they are nominally separate. There are some very interesting threads on Super User about the subject, if you're curious.

A weekend with Go

Over the weekend, I was tinkering with data encryption between iOS apps and backend APIs, which led me to find Rob Napier's popular RNCryptor library.

Along with the great Objective-C API, RNCryptor also has implementations in several other languages that can be used on the server side for decryption.

Porting RNCryptor to Go

For the sake of portability, I wanted to make a single binary that would handle all of the decryption and recording of data sent from its counterpart iOS apps, regardless of the particular web stack the server ran on. So I thought I'd give Go a try.

Come to find out, RNCryptor didn't have an implementation in Go yet (though it was on their to-do list).

So I set out to write one. Having never written any Go code before, there was a lot of googling and doc-reading involved. Thankfully, Go's standard crypto package provided everything I needed; it was just a matter of finding the right APIs for the job. The final implementation ended up being ~150 lines of code.

I reached out to Rob, who was super helpful and helped migrate the repo to the RNCryptor organization along with the other languages!

You can check out the end result at github.com/RNCryptor/RNCryptor-go.

First Impressions with Go

I really enjoyed my first few hours with Go. The standard packages are really rich and well documented. The built-in support for testing was also a big plus for me.

The language definitely felt minimal. There are no classes, no default parameters, no function overloading, no custom operators. You get structs and interfaces, and that's it. This is very different from something like Swift, where you can bend the language to your will pretty easily. But after accepting Go's constraints, it was a fun change of pace. The language seems to give you just enough to get the job done, and no more. Which I kinda like.

The workflow for writing Go seems a little wild-west-ish. There is no central package manager, which can make discovering third-party packages a little hard. You also have to keep all your Go code in a single "workspace" (like keeping it all under ~/go), which is mildly annoying for me, since I like to keep my code organized by project, not by language.

Overall though, I really enjoyed getting to hack on something new, and the fact that I was able to create my first Go package in a few hours says a lot about the Go ecosystem, I think.

Hello Oven Bits!

the awesome wall at Oven Bits

I’m super excited to say that I’ll be joining the awesome folks at Oven Bits!

Oven Bits is a small, focused team here in Dallas that makes superb mobile and web apps. They’re the guys & gals behind apps like Over and Instead.

After meeting the whole team and seeing some of the cool things they’re working on, I know it’ll be a really great fit for me, and I’m excited to be a part of what they’re doing.

You can check them out at ovenbits.com.

Using Arrays and Objects in Backbone.js Models

If you've ever attempted to use arrays or objects on a Backbone.js model, you might have run into some strange behavior.

At first glance, everything seems perfectly normal. You can define a model with arrays/objects as properties, no problem.

var Dog = Backbone.Model.extend({
  defaults: {
    legs: 4,
    likes: ["eating", "belly rubs", "barking at people"]
  }
});

var skip = new Dog({ name: "Skip" });
skip.get('likes');
// -> ["eating", "belly rubs", "barking at people"]

But as soon as you try to change those properties, you'll see that your model doesn't fire change events consistently. The examples below assume a change listener has been registered:

skip.on('change:likes', function () { console.log('changed!'); });

Setting the property to a new array does fire the change event.

skip.set('likes', ["playing fetch"]);
// -> 'changed!'

skip.get('likes');
// -> ["playing fetch"]

Using .get() and changing the array does not fire the change event, but does change the model.

skip.get('likes').push("eating shoes");
// ->

skip.get('likes');
// -> ["playing fetch", "eating shoes"]

(now it gets really weird)

Using .get(), changing the array, and re-setting it also does not fire the change event, but does change the model.

var newlikes = skip.get('likes');
newlikes.push("walks");
// our model is still the same

skip.set('likes', newlikes);
// ->

skip.get('likes');
// -> ["playing fetch", "eating shoes", "walks"]

So what to do now?

Always use _.clone()

Underscore (a dependency of Backbone) includes a handy _.clone() function that works around this problem. If you clone the array first, it will behave just like any other property.

var newlikes2 = _.clone(skip.get('likes'));
newlikes2.pop();

skip.set('likes', newlikes2);
// -> 'changed!'

skip.get('likes');
// -> ["playing fetch", "eating shoes"]

This problem can be especially tricky to catch, because you most often won't notice it until your views are only rendering part of the time.

If you want to read more on this problem, there's a good in-depth answer on Stack Overflow on the subject.

Custom Post Taxonomies with wp_cron()

I recently worked on a custom WordPress plugin that uses wp_cron() to automatically fetch data from an API and insert new posts into the WordPress database. While working on the plugin, I ran across a small gotcha when it comes to using custom taxonomies inside wp_cron().

The Problem

The posts would be successfully fetched from the external API and inserted into the WordPress database, but without any of the taxonomy data.

The Cause

The problem comes from the way wp_insert_post() checks user permissions while inserting posts.

Normally, when inserting posts into the database, I'll insert the post and all its taxonomy data using wp_insert_post():

$post = array(
  'post_title'    => 'Hello World',
  'post_content'  => 'This is a post. It has words.',
  'post_type'     => 'my_custom_post_type',
  'post_status'   => 'publish',
  'tax_input'     => array(
    'my_custom_taxonomy' => array('wordpress', 'syncing')
  )
);
wp_insert_post($post);

Under most circumstances, this will work perfectly fine. However, if you look in the WordPress Codex, you'll find this little note:

NOTE 4: If the current user doesn't have the capability to work with a custom taxonomy then using tax_input to add a term won't work. You will need to use wp_set_object_terms().

Turns out that code running from wp_cron() does not have permission to create/update any taxonomy by itself, and unfortunately it won't throw any errors; it will just skip the tax_input part of your post object.

The Fix

The good news is that this can easily be corrected by using wp_set_object_terms(), which doesn't have the same security restrictions as wp_insert_post():

// create the object without tax_input
$post = array(
  'post_title'    => 'Hello World',
  'post_content'  => 'This is a post. It has words.',
  'post_type'     => 'my_custom_post_type',
  'post_status'   => 'publish',
);

// get an id when the post is created
$id = wp_insert_post($post);

// now set the taxonomy
wp_set_object_terms($id, array('wordpress', 'syncing'), 'my_custom_taxonomy');

I hope it helps!

For older posts, check the Archive page.