When the iPad came out, I was hoping that, because of its low screen width in portrait mode (768 pixels), web developers would turn off their sidebars when they encountered low horizontal browser widths. That way, even if I didn’t buy an iPad, I could benefit when displaying my browser and text editor side-by-side. Alas, I haven’t seen many sites adapting to look better at lower widths. I think the main reasons for this are:

  • iPad uptake, while significant, hasn’t gotten web developers who are set in their ways (myself included) to change their layouts
  • The iPad’s pan and zoom feature is so convenient, horizontal scrolling hasn’t annoyed users that much
  • iPad users are compensating by browsing in landscape mode most of the time, or by rotating when they come to wide websites

With this in mind, I’ve been taking note when I see a site I like that doesn’t take up too much width, both while using my 15″ MBP and while using my iPad. I’ve noticed that having a sidebar rarely adds to my experience of reading a blog entry. I don’t need help forming biases toward the writer by seeing his or her picture. I don’t need to know how old the blog is, or what other topics the author blogs about. When I’m on somebody’s blog for the first time, usually the only thing I care about is the content of the article. If I enjoy the article, I may bookmark the blog and poke around some more later*.

I also like narrow text. So, a blog layout I like is one that leaves half the page empty, when I’m viewing it full-screen on my MBP. This isn’t weird to me. It doesn’t feel like wasted space. It doesn’t bother me that if I wanted, I could use the space for something else. In fact, I love it!

Here’s a problem. When I’m reading The Dip on the go, it’s nice that the book is small and easy to carry. However, when I’m at home, I don’t need the small size. Wouldn’t it be nice if, while I’m reading it, I could have a little information about Seth Godin and some links to his other books in the sidebar? Here’s a proof-of-concept:

[Image: web-in-physical-world]

Right now my site contains a sidebar with a tag cloud, links to archives, and some other stuff. Its days are numbered. I may replace it with a link to another page, a drop-down menu, or nothing. I don’t know if this goes against anything in Ambient Findability (which I hear is an excellent book). I doubt it. None of the stuff in my sidebar is connected to this article in any significant way. I often find links to related articles helpful, though.

* Ideally. I just realized that if I’m trying to do something, and I find a great blog entry that helps me, I should get back to doing that thing, rather than looking for other blog entries right away. This, I think, is a large part of what gets me spending too much time browsing the Internet.

I just realized one big thing that Freckle and Stack Overflow have in common: they have virtue built into the definition of their target market.

Freckle’s target market is defined as small businesses where everyone trusts everyone else. That’s how it can be a social time tracker where everyone can see everyone else’s entries. And tagging makes seeing other people’s entries useful for owners, managers, and employees alike.

Stack Overflow, on the other hand, is geared mainly towards programmers who want to cultivate knowledge. It’s also geared towards Google searchers, but they’re not the core group, and they don’t get the benefits available to the core group, like reduced advertising and moderation abilities.

For a counterexample, there are mass-email companies that sell their services to spammers. And there are ones that avoid spammers but have a hard time achieving separation just through how they define their target market (saying “people who want to send mass email, minus the spammers” doesn’t work well). Campaign Monitor circumvents that problem by marketing to designers, an intermediary between the company and a larger group that includes spammers, which keeps most of the spammers out.

I think defining a good customer base is a large part of having a good customer base.

I like seeing URLs, whether in the link text or by hovering over a link. They are often truer to their content than what the person adding the link writes. I always have the status bar turned on in my browsers. It annoys me that there is no status bar on the iPad. It pleases me that Chrome shows URLs without taking up space for a status bar, by fading them in when hovering over a link. But a significant percentage of URLs aren’t worth a whole lot. They are either too long, or contain nothing identifying but numbers.

Some say that URLs will go away. But do they have to? If people still care about them, I don’t think so. That’s why we need URLs worth caring about, just like we need buildings worth caring about, and neighborhoods worth caring about.

So how ’bout it?

(Yes, I realize that this blog’s URLs suck, and don’t have to. I plan to change that soon.)

I had some issues upgrading to Django 1.2.1, and needed to roll Django back to 1.1.2. I searched for “Downgrading Django”, and didn’t find instructions, so now that I’ve figured it out, I’m posting instructions here.

First, if you’re using easy_install, I suggest switching to pip. It has more features, is better designed, and uses the same repositories, so switching is easy. To install it, type sudo easy_install pip.

To install Django 1.1.2, type sudo pip install Django==1.1.2. When I ran this, pip automatically removed the newer version of Django, and the two glitches I was encountering with the admin interface went away. Once I figure out what caused the glitches, I’ll upgrade back to Django 1.2 so I can take advantage of its new features.
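Condensed, the whole downgrade is three commands. The version pin comes from the post; the final version check is my own addition, not part of the original instructions:

```shell
# One-time: install pip via easy_install
sudo easy_install pip

# Pin Django to 1.1.2; pip removes the newer version before installing
sudo pip install Django==1.1.2

# Verify which version is now active
python -c "import django; print(django.get_version())"
```

The same == syntax works in the other direction, so upgrading back later is just pip install Django==1.2.1.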

From time to time I’ve been recording voice memos. I find them to be valuable both when I record them, because I explore thoughts in a different way, and later, when I listen to them. It’s a convenient sort of diary for me. Here’s a snippet from a voice memo I recorded while I was driving from the Raleigh-Durham International Airport to the coast in a rental car, thinking about what I had just resolved to do, which was to learn things in depth, rather than just read web development news:

So, who shall I learn from? I guess, it doesn’t matter that much. What matters is that I learn, that I stick to learning, that I spend my time on real learning and that I don’t get lured into the kind of learning that isn’t that important. Now, it doesn’t help me to read Hacker News, and to poke around into every little thing that comes out…because, it’s rare that I learn about what comes out enough to use it on something, so what value am I getting out of it? Not much. And…it’s real easy to do that, because I’ve been lured into the idea that being a successful programmer is mostly about keeping up on learning everything new—not missing anything.

One poignant memory I have of not really learning something is when I read about rip on twitter. I went nuts about it, and I thought it was really cool that I had twitter and Hacker News so I could hear about it right away. Then, a couple of weeks later, I realized that I had done absolutely nothing besides read the article. I hadn’t even installed it. Meanwhile, in those same couple of weeks, I had spent maybe an hour a day reading articles I found on HN and twitter. I was really busy when I realized this, so I didn’t immediately go out and try it. Chances are, if I had any problems that could have been solved by rip, my knowledge of it would have got me nowhere. Because I hadn’t used it, I forgot almost everything in the article about it, except that rip is like virtualenv for ruby.

Now, a new one has surfaced: Node.js. I read some early articles about Node.js. I’ve downloaded it. I’ve even run a couple of demos. But what I haven’t done is build anything with it. Now, everyone knows about Node.js, and many people who don’t pay attention to new developments the way I did know Node.js, while I don’t. What’s more, people who have been more proactive about learning and participating in open source projects have people from other communities begging them to try Node.js. I’ve seen it happen. It seems that the information networks of those who genuinely participate in open source are fine without HN or twitter (though they may be enhanced by them).

I just found myself wishing that I hadn’t deleted some dead code, because I wanted to use the same technique I’d used earlier. Now, if someone else had been in my position, and refused to delete it, because they wanted to be able to look at it later, I would have suggested that they learn to use their version control system better. So what did I need to learn to do? That’s right: learn git better.

After googling, I found that with the -p flag, git log can be made to show patches. I ran git log -p admin.py, since I knew which file used to contain the code I wanted to look at. It brought up all the changes made to that file, including when I created it, in my pager. I searched for get_urls, because I wanted to find where I had set up a custom admin view in django. It came right up! I copied it and pasted it back into the file. Problem solved.
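The recovery workflow can be replayed end-to-end in a scratch repository. This sketch uses the file and function names from the post, but the commit history is invented:

```shell
# Build a scratch repo where some code has been deleted
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"

printf 'def get_urls():\n    pass  # custom admin view wiring\n' > admin.py
git add admin.py
git commit -qm "add custom admin view"

git rm -q admin.py
git commit -qm "remove dead code"

# -p shows the full patch for every commit that touched the file,
# so the deleted definition is still there to search for
git log -p -- admin.py | grep get_urls
```

In the pager you can search interactively instead of piping to grep; either way the deleted lines are recoverable without keeping dead code around.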

I’ve always had the type of job where I’m given a set of responsibilities and some tasks. Many of these tasks have a quick-and-dirty way of getting done, yet could be expanded into just about anything.

Clients haven’t always been the best at communicating which tasks are most important. Often, I don’t think they know which are most important.

Sometimes, I have a hard time focusing. That’s why I’m trying a new strategy.

I’m going to try doing many of the tasks in a quick-and-dirty way and see what my clients ask me to improve. Just like a blogger or a twitterer might put out a bunch of posts, and if they get positive feedback, try doing similar posts to engage and expand their audience, I’m going to try doing a bunch of little projects to see what I get the most positive feedback from.

Now, like a blog/twitter audience, clients can be fickle, and lose interest in something they seemed really interested in, so I have to always be adjusting to demand, just like a successful blogger.

The information that helped gel this idea was in a marketing video by Giles Bowkett where he introduced the concept of Test-Driven Decision Making. That’s the concept I’m using here.

Note: After letting this sink in, and participating on the Year Of Hustle forums, I think I was just making excuses. When I’ve had success “prototyping”, I was doing it with the intent to ship. I could have released my idea and repurposed it later. I think shipping is the habit I should be trying to form, and that it will help me hone my skills better than prototyping without the intent to ship (i.e. dabbling).

Some ideas just aren’t ready to be shipped out. They need more time to germinate. Ideas that aren’t ready to be shipped, though, are often ready to be prototyped.

I had one idea where the plans for building it felt complete, but I wasn’t sure what my goal would be with the released product. I wanted to do more than impress people with pretty graphics. A few weeks later, while I wasn’t thinking about the idea, I thought of a goal, and while I was trying to think of a way to achieve the goal, I realized my idea would be a great way to achieve it. I also came up with another feature for it that I think will help it stand out.

It would be nice if I’d already prototyped it, and had it sitting in a private repo on GitHub. There was nothing stopping me, except that I was hung up on making things I could show people. If I’d shipped other, smaller things recently by the time I knew what I wanted to build, I might not have run into this stumbling block.

[Image: “this is Roy… Roy the killbot” by Don Solo]

I’m not sure whether it’s shipping that makes prototyping work, or the other way around, but I think developers ought to have a steady diet of both.

My friend Andrew Hyde recently launched a really cool service for freelancers called pick.im (pronounced “pick ’em”). Here’s what it looks like in Mobile Safari on the iPad:

It searches based on three things: location, type of service, and budget. Yet one of them doesn’t show up in the form: the location. The location is auto-detected using either HTML5 geolocation or the IP address. If it gets the location wrong, it can be changed on the results page.

What I noticed is that people can try out the site without ever pulling up the keyboard. The only reasons to pull up the keyboard are to change the location, to contact a freelancer, or to sign up to be listed as a freelancer. Many people won’t do any of those things until at least their second visit, so their first visit can happen without the keyboard ever appearing.

When I use my iPad, I like being able to accomplish many tasks without using a keyboard. Subtle changes to a design are sometimes needed for this to work in practice. Pick.im could show the guessed location before searching, but it doesn’t. If it did, and it guessed Denver when I was in Boulder, I might stop to correct it, even though Denver’s proximity to Boulder means I’d get reasonable results anyway.

While I really dig the minimalistic design of pick.im’s search, that’s not all there is to it. Andrew has big plans for it, including helping freelancers spend less time dealing with things like billing and accounting, and more time doing the kind of work they enjoy. For that, the interface will need to be more complex, but by the time people see that, they’ll have already started using it.

It’s nice to hear about new businesses starting up right here in my town. Boulder has a lot of exciting startups right now, and the summer hasn’t even started.

I added a new feature to grem: if you’re somewhere in your ~/github directory, you can simply type “grem” and the program will use launchy to open an appropriate page on GitHub in the default browser.

This is handy, because often I’m in someone’s repo and I wonder what else they worked on. All I have to do now is go up a directory or a few, and run “grem” with no arguments, and I’m there!

I chose to make it go to a repo, rather than a directory in the tree, when I’m in a subdirectory of a repo, for simplicity’s sake. This way, I don’t have to worry about when a directory has been added to a local copy of a repo but not the github repo.
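I haven’t read grem’s source, so this is only a guess at the mapping it performs, but deriving the repo page from the current directory under the ~/github/username/reponame convention takes just a few lines of shell:

```shell
# Given a path somewhere under ~/github/<user>/<repo>/...,
# derive the GitHub page for the enclosing repo.
github_url_for() {
  rel="${1#*/github/}"   # drop everything through ".../github/"
  user="${rel%%/*}"      # first path component: the username
  rest="${rel#*/}"
  repo="${rest%%/*}"     # second path component: the repo name
  echo "https://github.com/$user/$repo"
}

github_url_for "$HOME/github/benatkin/grem/lib/some/deep/dir"
# → https://github.com/benatkin/grem
```

Because only the first two components after github/ matter, running it from any depth inside a repo lands on the same page, which matches the behavior described above.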

What do you think? Is this command-line tool useful? Do you think my directory structure makes sense? Are there any other ways you can think of that I can take advantage of having a system for organizing my copies of repositories?

Meet grem. Short for gremlin, grem is a tool for managing local copies of github repos. Grem is also my first gem. It was remarkably easy to create a gem with newgem and push it to gemcutter.

[Image: gremlin (from Inti on flickr)]

Right now, all grem does is clone a repository to a default directory. To try it out, run

sudo gem install grem

…and then:

grem benatkin grem

…and grem will clone my grem repo to ~/github/benatkin/grem, creating directories as needed.

I got this idea from a comment by KevBurnsJr on my earlier post, Organizing github repos, where he suggested making a bash script for cloning repos into my directory structure. I decided to take it a step further by making a gem, and adding more features later.

By the way, standardizing on ~/github/username/reponame has worked wonders for me. It’s lowered the friction for downloading repos, looking at them, and coming back to them later. This script should lower the friction even more.

Speaking of lowering friction, with Heroku’s help, I deployed a mini-app. Check it out at http://last7.heroku.com/ (github).