During my first few years working in software, I had the opportunity to work with a mentor who had the best approach to introducing new ideas that I’ve seen. The best way I can sum up the approach would be “Show, don’t tell. And don’t be a jerk.”
To give some more context, we worked in an organization where change was widely viewed as something to be feared, rather than something to be embraced. Ideas for change were often met with resistance, especially by management.
The primary way that he worked his magic was through small experiments – with the bulk of the work done on his own time. If there was something that he viewed as needing to change, he would take a small slice of that problem, and apply the new idea to it. It might not solve the whole problem, but it showed the path toward it. For example, if we had a problem with triaging production issues from messy logs, he might take a crack at changing the styling of alert emails that got sent out to be cleaner (this was before Splunk and other tools). But not everywhere, maybe just in one place on just one of our many applications.
Then, he’d show it to the team to see what they thought. He did this without explicitly selling the idea – just stating the facts and showing off a working example. Seeing the idea actually working, the team would often embrace it. We also had a working template to start from, should we decide to pursue it. This was a seed that he planted, and if the team decided to nurture it, that idea would grow a life of its own with just a little bit of initial effort.
What didn’t happen? Talking about the idea for change before starting. You didn’t hear “I think we need to change X to Y because…” followed by an hour of debate.
I’m guilty of doing this, and it doesn’t work. Talking about an idea for significant change, without action, dooms it from the start. Imaginations run wild and what-if scenarios scurry about like frightened mice. All the time wasted pontificating could be spent doing a small experiment to see if the idea really works.
If you do a change experiment simply, and test it cheaply, chances are you have a lot less to lose than a meeting where the whole team talks for an hour. Paradoxically: if the idea is complex and hard to start on, sometimes the best, cheapest experiment to run is a thought experiment and asking people what they think. You have to use good judgement, but avoid the dangers of the hour-long debate!
We also can’t forget that people sometimes are apprehensive of change in a team. They’re comfortable working the way they already work. Why would they change? But if you have visible proof that something works, instead of just words, it’s hard to be afraid of or argue with. Also, don’t forget to hear them out. Why are they resistant? Simply letting people talk through their feelings goes a long way to helping both you and them understand the new idea. They may have very good reasons for feeling as they do.
You have to be prepared for this strategy to fail. You will have ideas that are great and your team just isn’t ready for. Or, you will have terrible ideas that are rejected for good reasons. You cannot get defensive if these experiments do not pan out. You cannot under any circumstances get upset if those ideas are not embraced immediately. If they aren’t, try again, softly, with a slightly different experiment.
Many of us in software work with Agile processes, quickly iterating over software features until we hit on what our users want. Think of changing an organization the same way. If it doesn’t work the first time, what can you do better? Was it a problem with your idea, or was it just not communicated successfully? Was your example too small, so that it didn’t really present the power of your idea?
Above all: Don’t be a jerk. Lasting change on a team requires buy-in, and that doesn’t happen if you’re not empathetic.
So go out there, do good work and plant some seeds for change in your team.
Shortly after, I came across “Why I Struggle With Node” by Graham Cox. It’s a great post about why he loves Node.js, but gets frustrated with its ecosystem and prefers that of Java.
The post made me think of my conversation with my friend. I linked to Graham’s post as a starting point, added some additional thoughts and sent them to my friend to give him a place to start with Node again. After I wrote it up, I thought it might be useful to other Java developers who haven’t taken the dive into Node.js and would like to, but might be intimidated or not know where to start.
If you’re in that boat, I suggest you first read Graham’s post. It has a great list of tools to start with that translate well into front end code too.
After you read that post, here are my notes and add-ons that I think might be helpful if you are coming into Node development from Java at the beginning of 2017:
Also, before you dive into creating a full app or start setting up tools, learn the Promise API and use it from the start in your code. It will make your code easier to reason about, and will make your life much more pleasant once you get past the initial learning curve. If you can, use the Bluebird library for Promises. It has lots of great extras on top of the native implementation, and rumor has it that it’s even faster.
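For instance, here is a minimal sketch using nothing but the native Promise API (the function and values are made up for illustration; Bluebird can be dropped in as a mostly compatible replacement):

```javascript
// Wrap a callback-style async operation in a Promise so results can be chained.
function readConfig(shouldFail) {
  return new Promise((resolve, reject) => {
    // Simulate an async lookup; in real code this would wrap fs.readFile, a DB call, etc.
    setImmediate(() => {
      if (shouldFail) {
        reject(new Error('config not found'));
      } else {
        resolve({ port: 3000 });
      }
    });
  });
}

// Chaining keeps async steps flat instead of nesting callbacks,
// and a single .catch handles errors from any step above it.
readConfig(false)
  .then((config) => config.port)
  .then((port) => console.log('listening on ' + port))
  .catch((err) => console.error(err.message));
```

The flat `.then` chain is the payoff: each step reads top to bottom, and error handling lives in one place instead of in every callback.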
Like Graham, I use Grunt for building, too. I haven’t tried Gulp; I’m sure it’s great, but I’ve used Grunt for a while and have found it effective. Though, like Graham, I do miss the fact that Maven takes care of a lot of the gruntwork (ha ha!) for you. For my team’s apps, we have separate Grunt tasks and configurations for building front end and back end code since folder structures are different.
This is mentioned in the post but I thought I’d provide what I’ve used: Istanbul through Grunt. It’s nice and simple and does the job well.
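A sketch of how this can be wired up, assuming the grunt-mocha-istanbul plugin (the directory names and file mask here are illustrative, not from any particular project):

```javascript
// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    mocha_istanbul: {
      coverage: {
        src: 'test',             // directory containing the test files
        options: {
          mask: '*.spec.js',     // which files to treat as tests
          coverageFolder: 'coverage'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-mocha-istanbul');

  // `grunt test` now runs the suite and writes a coverage report
  grunt.registerTask('test', ['mocha_istanbul:coverage']);
};
```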
Graham mentions that Node logging isn’t as nice as what he’s used to in Java. That’s true, but I’ve used Winston quite effectively in production apps.
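A minimal sketch of the kind of setup I mean, using the Winston 2.x API that was current at the time (file names and levels are illustrative):

```javascript
var winston = require('winston');

// One logger, two transports: terse console output plus a verbose log file.
var logger = new winston.Logger({
  transports: [
    new winston.transports.Console({ level: 'info', timestamp: true }),
    new winston.transports.File({ filename: 'app.log', level: 'debug' })
  ]
});

logger.info('server started', { port: 3000 });
logger.debug('only written to the file transport');
```

Adjusting the `level` on each transport is how you turn the detail up and down without touching the logging statements themselves.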
Graham is right, debugging in Node is hard. I attribute some of that to its asynchronous nature, though.
I’ve used node-debug effectively as a debugger. It’s quite nice, but a real pain to set up. But, if I’m being honest again, usually I try to use Winston (mentioned above) to put some nice low-level logging statements in place so I can debug with those, then turn logging levels up and down as needed. Sometimes those low level statements help when you’d least expect it.
When I first started with Node, we were using a dependency injection framework in Node since my team was used to using Spring in Java. After a while, we just got frustrated with it and abandoned it.
Graham’s comments about integration testing being much easier in Java are spot on. I haven’t come across nearly as many drop-in, in-memory replacements for connected systems like databases or message queues. This makes it tough to integration test. What I’ve done in the past is to create throwaway Continuous Integration database instances at the beginning of the tests and then clear them out at the end of the tests. It’s not that bad once you’ve set it up, but I yearn for the ease of using something like HSQL in Java.
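The throwaway-database pattern can be sketched with mocha-style hooks. `createThrowawayDb` and `dropThrowawayDb` are hypothetical helpers standing in for whatever your database’s admin API provides:

```javascript
// Hypothetical helpers: implement these against your database's admin API
const { createThrowawayDb, dropThrowawayDb } = require('./test-db-helpers');

describe('order repository (integration)', function () {
  let db;

  // Spin up a dedicated database before the suite runs
  before(function () {
    return createThrowawayDb('orders_test_' + Date.now()).then(function (created) {
      db = created;
    });
  });

  // Tear it down afterward so the CI environment stays clean
  after(function () {
    return dropThrowawayDb(db);
  });

  it('saves and reloads an order', function () {
    return db.insert('orders', { id: 1, total: 5 })
      .then(function () { return db.findById('orders', 1); })
      .then(function (order) {
        if (order.total !== 5) { throw new Error('round trip failed'); }
      });
  });
});
```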
Team Development Environments and Docker
My team is split between Windows users and Mac users, which makes setting up a consistent Node development environment challenging for everyone.
If you’re on a team with developers using multiple platforms, and deploy to Linux, what I recommend is to create a Dockerfile which defines your development environment. Then, use a Docker volume mount to put your source code inside the container. (Side note: Use an official Node.js Docker image as your base image). This lets you use normal Docker build/run commands to bootstrap your dev environment. The volume mount lets you dynamically update your code so you can still iterate fast, as if you were working on your local machine. It works well across platforms (no worrying about exceptions in Windows), and as a bonus, if you ever want to run your app in a Docker container, you’ve already got it more than halfway there. I’m hoping to post more about this in the future.
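As a sketch of the idea, assuming an app started with `npm start` (adjust names and versions to your project), the Dockerfile might look like:

```dockerfile
# Base on an official Node.js image so the runtime matches production
FROM node:6

WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer
COPY package.json .
RUN npm install

# The source itself is volume-mounted at run time, so no COPY of src here
CMD ["npm", "start"]
```

You would then build once with something like `docker build -t myapp-dev .` and iterate with `docker run -it -v "$PWD":/usr/src/app myapp-dev`, so code changes on your machine show up inside the container immediately.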
I’ve been developing in Node for about 3 years now, and I still really love the developer experience. Sure, I’m probably not using the latest and greatest tools that just came out, but this workflow has gotten me to a good place where I can be very productive and crank out some good quality code.
I hope some of these notes help you do the same.
What tools/techniques/libraries did I miss that were important for you when you came to Node from Java?
As a coder, it can be easy to get mired in details. Our job is to manage the details of a business in code. And with the best of intentions, we may even create more details for ourselves through our designs and technology choices.
Worrying about so many little things, it’s easy to lose sight of why we’re putting fingers to the keyboard in the first place. Asking a simple question: “Why?” can help you rise above all the details to see the forest for the trees and make better software.
When assigned a task or project, many times my first reaction is to think of which technology is most appropriate to complete the task, and then quickly after that, start formulating how I would code it.
The technology is the fun part! And it’s our job, right? Well, yes and yes, but immediately jumping into a technical solution is a guaranteed way to make things more complex than they need to be.
And while you’re making things more complex for you and your team, you might be hurting your customers by not delivering what they really need or want.
Next time you get a project, instead of opening up an editor, or Googling, or drawing a diagram, first ask “Why am I solving this problem?”
If you don’t know why, ask the person who gave you the task. If they don’t give you a satisfactory answer: ask more questions. If they still can’t answer your question, and you can feasibly do so without getting fired, ask them who else can tell you why.
“My boss told me to.” is never an acceptable answer on your part or anyone else’s.
What is a good answer? I don’t know, stop asking questions! Just kidding. But, it should probably address one or more of the following:
A specific use case. For example: C-Level executives need this new report to make decisions about budget next year. Or: All users need to be able to save their login credentials in a cookie so they can save time each time they access the app.
Ease of operations. For example: Formatting log messages in XYZ format will allow the application support team to parse them easier and identify causes of bugs in logs quicker and make our customers happier.
Speed or quality of changes. For example: Writing an automated acceptance test suite will help us react better to customer needs by getting features out faster.
But ultimately, a good answer to “Why?” is one that makes sense to you and isn’t simply “It’s my job to write code” or “My boss asked me to.”
It is staggering how a simple “Why?” can halt people in their tracks and cause them to change their decisions, often for the better.
Especially on a team of seasoned people who have been with an organization for a long time, it’s good for someone to keep asking “Why?”. Chances are, there’s a bunch of people in the room with differing opinions, and the question will get them out in the open. Sometimes, it might even turn out that the task shouldn’t be done at all. Which is good, that frees you up to focus on more important things. At the very least, you’ll probably dig up some serious “gotchas” about your task in the process.
Your Leaders Are Human
When you’re looking for answers, be patient and remember that the product owner/architect/supervisor/bossman/leader you’re asking is busy and he is human, just like you. He has lots of stuff he needs to get done, and sometimes he makes mistakes and has lapses of judgement. Your job is to consult with him to get things done for a business. You can’t consult with him if you don’t fully understand why you’ve been assigned work.
If you don’t understand why, or he didn’t explain: Consider it part of your job to ask more questions. If he is at all a decent leader, he will take the time to explain the value of the bug you’re fixing or the enhancement you’re doing.
If your leader’s answer ends up being “because my boss told me to” and you can’t get any further, then, well, it might be time to find a new organization. Unless, of course, you’re up for the task of managing upward, which is probably a good topic for another post!
Reducing Accidental Complexity
Earlier, I mentioned that we sometimes create unintended details through our everyday choices of technology and design. These unintended details are generally referred to as accidental complexity. This is opposed to essential complexity, which is solving the “real problem.”
Accidental complexity is part of our job and we can’t avoid it. But we can mitigate how much of it we create! Asking “Why?” helps.
When you ask why you’re doing something, the answer of “Because technology X needs technology Y” should be an immediate red flag that someone doesn’t fully understand the business problem at hand.
Writing code to solve a problem that other code created, or to solve a shortcoming of the AnguReactBerJS framework is not providing any value to the customers of your software. You are probably just creating more accidental complexity instead of solving a real business problem.
Asking “Why?” will help you focus on the simplest way to solve a real business problem instead of simply fixing technology. You might be able to bring your solution up a level to eliminate a technical issue or shortcoming altogether.
Say you’re given the task to add a set of Automated Acceptance tests for your application. Do you know why you’ve been given that task?
Is it because we want to use SeleneCucumBybara, the latest, greatest testing framework? Probably not. Is it because we want to shield ourselves from creating bugs and allow us to be more flexible with the codebase so we can get features out quicker? Now you’re probably on to something.
Knowing “Why?” will help guide you through the immense pile of decisions you’re going to make when you start writing code. You’re probably going to make multiple decisions per line of code you write. You want your coding decisions to be made with the right values in mind. For instance, if you want to safeguard against a changing codebase, knowing this might help you target tests for the area of code which changes the most.
Knowing Where To Cut Corners
“Cutting Corners” is a phrase engineers hate, myself included. But in the reality of most business software development, a shipped product is better than a perfect product, so there’s going to be some paint dripped on the floor.
Asking “Why?” will help you know where it makes sense to spend less time polishing code and where a cut corner is acceptable.
For instance, if the answer to “Why?” is “We need to get this feature out fast so we have first mover advantage” (meaning time to market is important), maybe it’s more important to have a few solid end to end automated acceptance tests than it is to have good unit test coverage.
Or, if the answer is “This bug is causing customers to be charged $5 per order by mistake”, you probably want to spend more time writing tests for your code and less time polishing the front end.
Remember that the concepts of Technical Debt and Backlogs (in Agile) exist to help us make sense out of the imperfections we might create. These are ways for you to quantify your technical risk, track it, and hopefully address it later.
Does a problem happen often? Ask “Why?”. This will help you figure out if a problem is systemic. Fixing the cause of a systemic issue will save the team repeated effort which will allow everyone to focus on solving problems with more value.
If you’re new to software development, it might seem intimidating to ask such a simple question all the time. You might be afraid of seeming like you’re not good enough. But newbies are in a perfect spot to ask “Why?” A good team should have an understanding that a person new to software won’t know it all and needs some help. They should also come to realize that questions like these will help the whole team get a better understanding of what they’re doing and if it needs to change.
I think it can be harder for someone who has a lot of experience to ask a simple question like “Why?” all the time. With more years of experience come greater expectations on the part of others. This makes it more likely for a seasoned developer to perceive themselves, or possibly even be perceived, as being incompetent, even though they are anything but.
Regardless of your experience, it takes some courage to ask “Why?” Just keep on asking and feel dignified knowing that you’ll find better answers and do better work than those who don’t.
In one of the talks I attended, Jeremy Keith reminded us that though the web has become a complicated beast, it is still amazing in its simplest form. And we shouldn’t forget that.
This sentiment rang very true to me – It made me recall the pure excitement I felt back in the mid-1990’s, when I was 13 years old, putting my first web pages on the Internet. A kid like me could create something from nothing, and better yet, self-publish it and have it accessible by the whole world. Anyone could see my content!
Along this path, I forgot about the power of a browser rendering simple HTML and CSS. And perhaps more importantly, having that content be globally accessible in one simple place – a URL.
These ideas were underscored by almost every other speaker at An Event Apart. They reminded us that in order to create the most useful web sites for a user, we need to forget about fancy layouts and CSS. Instead we should focus on content first and ask ourselves: “What is most important for the user?” We should make that content simple and easy to find, regardless of how advanced their device or web browser is. Then, get it captured in plain, semantically meaningful HTML markup. The rest will fall into place.
I write this with the hope that it will help me remember these ideas for the next web app I build.
On a given day, I’m interacting with at least 10 different Linux servers. Whenever I used to shut my laptop, I got frustrated having to reopen ssh connections to every server I connected to. I also got myself into terminal tab overload, with a separate tab open for every server I connected to. This is no way to live!
Perhaps you interact with more. A lot more. Or even just more than one. If so, you’ve probably experienced similar frustration.
I found a pretty good solution to both of these issues using tmux and ssh. In this post, I’ll describe my workflow for working with my many Linux servers.
The overall idea is to pick a primary remote host to use as your “Home Base” and manage all your other connections through it. This could be referred to as a “bastion host”, though in this case you’re not using it for security, but convenience.
Then, use a Terminal Multiplexer to keep all your SSH sessions open on that host. For this article, I will focus on tmux.
By using a multiplexer, you can keep your terminal sessions open perpetually on a remote host. If you get disconnected from your local developer machine, you can just connect back to your “primary” remote host, then pick your connection, and be on your way.
Your terminal output should even be saved and you can very easily switch between them without multiple tabs or windows open on your local development machine. Nice!
If you’ve used tmux before, this should be pretty straightforward. If you haven’t, consider this a nice way to get your feet wet getting started with it.
If you don’t have the ability to install packages, you can also check to see if you already have tmux (run which tmux) or GNU Screen (which screen). If you already have one of those, you’re golden!
If GNU Screen is your only option, this approach will still work, but I will only outline the detailed steps for tmux below.
Make sure you use a passphrase along with your key for extra security. There are even ways to cache your passphrase so you don’t need to type it all the time.
Setup SSH keys on your Home Base
You have some options here:
You can just reuse your same private key on your local developer machine and your Home Base host. This gives you the flexibility to connect to the same servers with your local developer machine or your Home Base in a pinch. But, it could be considered less secure.
If you want extra security, you can generate another ssh keypair for your home base server, in addition to the one you’ll use for your local developer machine. That way, you have an individual key for each machine you’ll be connecting with. The downside is more keys and passphrases to manage and remember.
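Either way, generating a keypair looks something like this (the ed25519 key type is one option; substitute rsa if your servers are older):

```shell
# Generate a keypair; accept the default file location and
# enter a passphrase when prompted
ssh-keygen -t ed25519 -C "you@yourdomain.com"
```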
Copy around your public keys
Now that you’ve got an SSH keypair (or keypairs), make sure that you copy your public key to your Home Base host, so you can easily ssh into that.
If you decide to re-use your private key, copy that over to your Home Base server. Otherwise, generate another keypair for your home base server.
This is how I copy my public SSH key around:
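Assuming your Home Base is reachable as `homebase.yourdomain.com` (substitute your own usernames and hostnames):

```shell
# Copy your default public key to the Home Base host
ssh-copy-id user@homebase.yourdomain.com

# If ssh-copy-id isn't available on your machine, append the key manually
cat ~/.ssh/id_rsa.pub | ssh user@homebase.yourdomain.com \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```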
Next, repeat those same steps to copy your public key to each and every server
you regularly connect to.
This step is probably the most tedious, but it will pay off in time savings within a day.
Setup SSH Configs
One snag that I hit early on with ssh was dropped connections due to inactivity. To keep connections
open, I set the ssh configuration option ServerAliveInterval on each host to 60 seconds. This will periodically
ping the server to keep the connection alive.
Here’s what you can put in your .ssh/config file in your Home Base to set this
for the server ‘myremoteserver1.yourdomain.com’:
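```
Host myremoteserver1.yourdomain.com
    ServerAliveInterval 60
```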
You can also do this for all hosts by doing this:
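```
Host *
    ServerAliveInterval 60
```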
Start spinning up sessions with tmux
If you’ve never used tmux before, reading about its features is a good idea. For now, I’ll just give the basics to get a good multiple server workflow going.
SSH into your Home Base host, pick one of the servers you frequently connect to, and start a tmux session for it by typing the following:
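For example, naming the session after the server (substitute your own server name):

```shell
tmux new-session -s myremoteserver1
```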
You’ll then see a nice green bar at the bottom of your screen telling you you’re in the comforting arms of a tmux session.
Now, ssh into one of your frequently accessed hosts in that session. Go ahead and play around a bit.
When you’re ready, create another tmux session for another host by opening a new tmux session with a different name related to that server.
You’ll do this by triggering the tmux “prefix” keybinding. This keybinding will put you into tmux’s control. By default, the tmux prefix keybinding is CTRL+b.
To start a new session, the key sequence you’ll start with is: CTRL+B :
That’s: Hold CTRL and B, followed by a colon (:).
You’ll now see a yellow bar at the bottom of your terminal you can type into.
Type the following (should look familiar!): new-session -s myremoteserver2
Now you’ll be put into a new tmux session named for your second remote server.
Then, SSH into your second server. Play around.
Repeat these steps for any frequently accessed servers.
Phew, that was a lot of work. So, what’s the big deal?
Well, now you can easily switch around between these different sessions to your heart’s content.
And they’ll stick around after your laptop drops Wifi!
You can do this by hitting the tmux prefix plus the “s” key. What that key combo would look like is: CTRL+B s
You should now see a menu with the two sessions you created. Select one of them with the arrow keys and press enter and you should be able to switch back and forth.
Since your sessions are still open, you’ll have all your buffer output still there from when you last left it. And
since your SSH won’t time out, you can keep these open as long as you need to.
Disconnect and try again
Let’s say you close your laptop and take a break. You connect back to wifi and
want to get back onto your remote hosts. If you ssh into your Home Base again, and type: tmux at
You will be put right back into those same tmux sessions (the at means “attach”). Everything preserved!
This will stay until you exit your sessions in tmux or tmux is terminated on your Home Base host.
Don’t forget to clean up your toys
Periodically, it’s important to go through your tmux sessions and prune them for servers you don’t need to actively be connected to. You can do this by again pressing the tmux prefix followed by a colon: CTRL+B :
Then at the yellow prompt, type: kill-session -t sessionname
Where the session name is one of the ones you created earlier.
As I wrote this article, I realized this was really a big explanation of a tmux/multiplexer use case. This workflow is really scratching the surface of what tmux can do. If you find this way of working effective, I would encourage you to learn more about tmux.
As you learn more, you will see it is very configurable and powerful. People also share their tmux.conf files and you can learn a lot about what it can do that way, as well.
I have the comparative luxury of connecting to all these machines inside my organization’s firewall. I can’t speak for the security implications of leaving open SSH connections like this across the “wild” internet. I would love to see comments about that situation.
The first part of this post is in the form of a screenplay for an infomercial:
Narrator: “Do you work with computers in any capacity beyond Web Browsing, Microsoft Office and Gaming?”
Black and white footage of a software developer is shown. He is clicking madly and is extremely frustrated about having to rename 20 *.txt files in a directory that contain a bunch of Word documents.
Narrator: “Are you fed up with all that clicking when you need to do some repetitive task on your Mac or PC?”
Developer: “There’s gotta be a better way!”
The developer throws his hands up in frustration and pushes his laptop out a window. Stock footage of an explosion at the bottom of a canyon is shown.
Narrator: “Well don’t go crazy! You can save yourself tons of time by just learning a little bit about our old-fashioned computer friend: the command line!”
Footage of the software developer is now in brilliant color and shows him in front of a beautiful Macbook, with the Terminal open, smiling like a madman.
I started writing this post and it ended up sounding like an infomercial, so I just went with it. When my wife and I get frustrated with some kind of everyday annoyance, we always joke: “If this was an infomercial, we would be doing this in black and white and not color! There’s gotta be a better way!”
Am I rambling? What’s my point? I feel like working with the mouse for serious computer work is like seeing the world in black and white instead of beautiful, magical color.
I am inspired today to write this because, last night I read possibly the best explanation ever of why you need to stop using Graphical User Interfaces and start using the command line in Unix/Linux.
Is this content new? No. It is old, but it is as timeless as anything can be in the computer age. It is written for the average power user of a computer who doesn’t yet use the command line for his work.
I can’t possibly do a better job than Oliver trying to convince you why Linux and the command line is so powerful, so read the first three sections of that Wiki. Come back and let me know what you think.
If you’re not convinced, you should just throw your laptop out the window and stick to your smartphone and maybe find a job in woodworking.
For Linux power users
You’re probably (hopefully?) nodding your head in agreement with everything I said, and maybe didn’t read that article. You might think you know it all, but I would recommend taking a look. It will give you fresh eyes and a fresh mind. It will inspire you and remind you why you love Linux. You might learn something you didn’t already know, or, at the very least, save the link and send it to the next person at work who steadfastly refuses to stop click-click-clicking.
Also, please take a look at Oliver’s list of 100 Useful Unix Commands. You will know a lot of them, but some of them you won’t. And I bet you will learn something about some commands you already thought you knew as well.
My favorite was the “cd_func” under the pushd/popd section which allows you to track and navigate back through your visited directories. Awesome!
Then, I heard about it a lot. A LOT. Developer after developer, blog post after blog post. The book’s central idea is to automate (almost) everything in your software development processes. By doing this, the theory goes, you can deliver value to your software customers faster, with less risk.
There were many potential uses for these ideas coming up in my day-to-day work, so I finally decided to read the book to see what the hype was about. It was definitely a worthwhile read, and as a way to help me remember some key points from this book, I decided to write up this post. I thought it might spark some interest in others to read it as well.
Why read this book?
If you’re involved with software in any way: a developer, a sysadmin or a manager, you really should know this stuff.
Reading it cover to cover may not be necessary, which I’ll explain below, but you should at least be familiar with the concepts and techniques talked about in this book.
Even if you think you know the concepts, you should still read it. Admittedly, some of it is simply becoming standard knowledge and some of the tools it refers to are dated. But, the details in many of the chapters matter and will be really valuable for you.
Before you start reading…
Plan on reading about a chapter or less at a time, then stopping to digest it for a day or so. Because it’s written so well, it’s almost deceiving how dense the material is.
As I alluded to before, don’t feel like you need to read the whole book. This is mentioned in the book’s introduction. There are a few essential parts I’ll outline below, but otherwise, pick and choose as you see fit. The authors specifically mention that each chapter should stand on its own.
Here are the parts I consider must reads:
This sets the stage for why the book was written and the best way to read it (in the authors’ opinion).
Chapter 1 – “The Problem of Delivering Software”
This chapter addresses fundamental problems the book sets out to solve and general principles it follows. It also will define certain key terminology that you’ll need to be familiar with throughout the book.
Again, this is an especially important read for managers who may have been away from code for a while, or (gasp!) managers who have never coded before. It will help you understand how manual processes are fraught with errors, cause your developers to tear their hair out and generally make them (and by proxy: you) miserable.
Chapter 2 – “Configuration Management”
This was a standout chapter for me. Humble and Farley present why it’s important to be able to automate the creation of your software environments completely. Most of this stuff I was already familiar with, but the way this information is laid out is the best I’ve seen: Step by step, introducing the problems and high-level solutions. It helped me solidify my existing understanding and give more depth to what I already knew.
This is the type of information you need to convince others that automation of your environments is of utmost importance and also tell them that “Yes! It’s possible!”
If you’re less technical, let your eyes glaze over anything that you might not understand. You’ll still come away from the chapter with plenty of new ideas. Just be careful about how you present them to your developers if they’re not already on board :)
Stand out areas
Infrastructure Management (Chapter 11)
This was probably my favorite chapter in the whole book.
For a large, dense textbook, I found myself on the edge of my seat in this chapter. I could not get enough of the ideas that the book presented to solve common issues with software environments getting out of sync and how to prevent them. I lost count of how many times I thought:
“Wow, this would have helped with [insert huge, time draining issue here].”
There are also some important tips in this chapter regarding keeping your environments maintainable. Things like logging, change management and monitoring.
You could almost consider this chapter a crash course in good system administration. And it applies whether you’re running in the cloud, your own datacenter, with one server or thousands.
A key theme throughout the book is that you should use source control. I knew source control was important, but after reading the book, I realize how crucial it is to everything in software. Now, I want to Git all the things!!!
The authors make it clear they are not fans of feature branches, because feature branches deviate from the main line of code and thus stifle Continuous Integration across developers. I can see their argument, but having used feature branches very effectively, I have to wonder if their opinion has changed since 2010, when the book was published. Things like pre-flight builds help with these sorts of issues.
One of my key takeaways from the book (from Chapter 8) was that to effectively run Automated Acceptance Tests, you should build a test API layer into your application. This layer captures important use cases which you can use as building blocks for real tests. This is light-years more maintainable than using UI-driven tests, which the authors say should be used as little as possible.
This was eye opening and a very useful idea I hope to implement someday soon.
Also, they give good guidelines for how to categorize testing and when it makes sense to automate a test and when it doesn’t.
Chapter 12 – “Managing Data” also gives some really great tips on how to manage test data.
My Only Criticism: Not enough advice on where to start
My only criticism of this book was that it preached about the ideal state of Continuous Delivery a lot and didn’t spend enough time on how to get started if you’re already in a rut – which, I’m sure, is where most readers are.
I’m sure I’m not alone in that I desperately want to reach the God-like software state these guys describe, but I have some harsh realities of a large organization I need to deal with before I can get there.
What follows is a list of some of the questions the book raised for me, but didn’t answer. My intent here is not to answer these questions, but to highlight some areas I know I need to get more information on – and that you might, too.
I would love to hear your comments about any of these:
Differing levels of enthusiasm
If you work in a medium to large sized organization developing software, I’m sure you have to deal with a range of enthusiasm for the job. This ranges from older people who are completely content to run a deployment from a Word document, to overly enthusiastic cowboy developers who do what they want when they want, without the faintest whiff of a plan.
How do you herd these cats and get them drinking the automation Kool-Aid? How do you get people who aren’t excited, excited? And how do you contain the people that want to go crazy shaving yaks without a good, solid vision and plan to get there?
This is something I’ve been thinking about a lot since I read the book and have tried a few things out, but that’s for another post.
With individual contributors moving your digital widgets, a lack of enthusiasm is one thing. But if you have a lack of enthusiasm in management (especially upper management) this can present a serious roadblock to making progress towards these ideas. Or even worse: What if they’re opposed to the costs of implementing automation?
I’m still trying to find good ideas for convincing upper management to back these ideas (Without telling them to read a textbook…). Management wants numbers, and numbers for this stuff are hard to come by. It seems as though storytelling is more effective, but not everybody buys stories…
Furthermore, what about dealing with upper management who pushes too hard on these concepts without really understanding them? You know, the type who reads about DevOps in an in-flight magazine then lands and puts “Must implement DevOps by June” on your yearly objectives…
The book mentions, or alludes to, DevOps a fair bit.
The authors recommend having a unified team with development, operations and QA all working together in harmony. That’s great – at a startup or an organization supporting one large application.
But, what if you work in a large, siloed organization that supports hundreds of medium sized applications? How can you get these ideas to work?
Side note: DevOps is a buzzword I’m beginning to dislike. I appreciate what it’s after, but it seems to have been engulfed in meaninglessness by big companies wanting to make a buck.
How do you manage continuous delivery in a team of people who range from fresh out of college to senior developer? How do you get newbies started? How do you teach old dogs new tricks?
If you made it here, I hope you’re convinced you should read the book, or at the very least add it to your Amazon Wishlist :) If you end up reading it after seeing this post, come on back and tell me what you think.
If you’ve read the book prior to reading this post, I’d love to hear your comments/criticisms/ideas in the comments.
After reading this book, if you’re not convinced that moving towards these concepts is worthwhile, you should probably find another profession.
My Christmas gift to myself this year was to install a new Solid State hard drive (SSD) in my old mid-2009 Macbook Pro. Wow, what a difference! The most honkin’ app I run, Adobe Lightroom, now takes about 5 seconds to load, compared with what felt like an eternity on my old hard drive.
This post outlines what I did to perform this upgrade, which, all in all, was pretty easy. I’m writing it down so it might help others and also so I don’t forget my own steps if anything goes wrong in the future :)
Why I decided to upgrade
My old Macbook has served me well and is in great shape. A new Macbook would be fantastic, but shelling out $200 versus over $2000 is a no-brainer. It’s also maxed out on RAM (8GB), and it was running OS X 10.6 Snow Leopard – four major updates behind. So, it was really showing signs of age and I was missing out on some of the newer OS X features.
In addition to doing personal coding on my home machine in Node.js, Java and the odd bit of Ruby, I also use my Macbook for:
Using Adobe Lightroom to process photos
Using Photoshop Elements to touch up the odd photo
Using VMWare Fusion to run a Windows 7 Virtual Machine so I can run Quicken for personal finance
Side note, this is sad, isn’t it?
So, I do a few things in there that require some power beyond the piddly stuff.
After reading a lot of reviews of Solid State Drives for Mac, I chose a Crucial M500 480GB drive. It had the best combination of good reviews and price. I made sure to search the Amazon reviews for it to confirm it worked with my 2009 Macbook Pro. (For my model, this was a particularly helpful review.)
I had the drive in my Amazon shopping cart for a long time before buying it. I eventually set up a camelcamelcamel.com alert for it and got it about $40 cheaper than it usually is.
What I needed before installation
Besides the hard drive itself, I needed the following:
External hard drive enclosure
I wanted to reuse my old hard drive, so I found an enclosure that would let me use it as an external USB drive. I went with a Sabrent enclosure.
This was doubly awesome because I could just pop the old drive in and copy all my data over using USB 3.0 (see later on).
One drawback is that it sucks a lot of power (again, see later on).
A thumb drive 8GB or larger
Since I was going to install a new version of OS X on a new hard drive with no recovery partition, I needed to make an OS X installer thumb drive. To do this, I needed a thumb drive 8GB or larger.
Finally, I made sure I had my trusty set of small screwdrivers (pentalobe, etc.) to use for the tiny screws in the Macbook and the internal hard drive. Having this little kit around is indispensable if you’re going to do anything with a Macbook, iPhone or iPad.
Here’s how I did it
Find a bunch of free time
Having a 17-month old baby, I had to wait for a day when I would be able to “tend” to the install. While it’s not a lot of work, it does take a while and needs to be checked in on. Kind of like… A baby? No, that’s not right – a baby IS a lot of work! :)
Create a bootable USB installer
I decided to go straight to the latest OS X and use 10.10 (Yosemite). To start this out, you have to go to the App Store and download the Yosemite installer. This is a few GB and takes a while to download, so I did that early on.
I had a feeling that VMWare Fusion might not play well between Snow Leopard and Yosemite, so, I backed up my important stuff from my Windows 7 image and stored it on my Mac drive. In my case, this was really just my Quicken files.
First things first: I backed up all the things! I use two different external hard drives to run my backups for extra redundancy, swapping the drives every couple of backups. So, I ran a time machine backup of my old hard drive on both of them for good measure.
It’s worth noting that this enclosure needed to be hooked up to a USB hub that had external power. My old macbook’s USB ports couldn’t power it.
Additionally, I realized that even though the enclosure supported USB 3.0, my old Macbook did not. Slower speeds, oh well!
Format the drive
I plugged in the bootable USB installer into the Macbook, held the Option key, and booted up.
I opened up Disk Utility, chose the new hard drive, then formatted it.
Then I exited Disk Utility and installed Yosemite. It took about 45 minutes to complete.
Once Yosemite was installed, I created a new user to test things out, making sure it had a different username than the one I used to use. Then, I opened Migration Assistant to move my data and settings over from the old drive to the fresh Yosemite install. You can also run Migration Assistant from a Time Machine backup, like the one on my existing external USB hard drive.
Install the new Xcode
I use Homebrew, and after the upgrade I was having some issues with various utilities. Running brew doctor gave me the hint I needed: upgrade Xcode from the App Store. This took about 30 minutes to download and install.
I also ran brew update and brew prune, and, since I was having some issues with MacVim, brew uninstall macvim followed by brew install macvim --override-system-vim.
Deal with the VM
As expected, VMWare Fusion didn’t work across OS X versions. Migration Assistant was very clear about that. I decided to give up on VMware Fusion and just try out using VirtualBox since it’s free.
This article gave a really good overview of how to convert my VM over from VMWare Fusion to VirtualBox.
I also installed the VirtualBox Guest Additions by using the “Devices -> Insert Guest Additions DVD Image” once my VM was up. Then I could share folders, resize the screen, etc.
Time machine backups
One thing I did not anticipate was that I would have issues with my existing Time Machine backups. When I went to run a new Time Machine backup, I was told I didn’t have enough space on my external hard drive.
It seems like Time Machine was not able to “reconnect” to my old backups; it was trying to back everything up again and didn’t know that it should delete older copies of my backups. This article from pondini.org has some good information about how to reconnect backups if this happens.
As of now, I have not been able to solve this issue and may just reformat my external hard drive and start a fresh backup with Yosemite. If I get this fixed, I will update this post.
Update 1/10/2015: I ended up reformatting my backup drive and just started over. Using tmutil inheritbackup and tmutil associatedisk as mentioned in the pondini article and this article by Simon Heimlicher unfortunately didn’t work for me.
If you’re reading this post and haven’t already closed it, you’re probably already convinced that Vim is useful.
The purpose of this post is to help if you want to learn Vim because you see its promise, but just can’t seem to get past the hump of learning it and making it your primary editor. Or, perhaps you don’t know where to start.
In my opinion, and probably many others’, it is worth getting past the Vim hump, so hang in there! I’ll tell you how I went from IDE lover to Vim lover and hopefully provide some useful tips to get you there as well, if you want to.
I got my start developing in Java. So, what was my code editor of choice? An IDE, of course! First Eclipse, then IntelliJ IDEA.
As I moved along in my career, I saw presenters at meetups and conferences who used Vim. I watched in awe as they performed text gymnastics during their presentations. It was like they were playing an instrument, rather than coding.
Occasionally, in a fit of inspiration, I would give Vim a try. But I never could get into the swing of things. It just seemed too hard to learn, and I couldn’t get past how difficult it was to navigate around a project with a large set of files.
So, I puttered on, half-effectively using Vim on remote Linux servers where I quickly needed to view or edit a file without downloading it.
The moment of inspiration
Usually it went something like this:
Vim Ninja: (( tap tap tap )) – ((50 lines show up on screen in the exact spot he wanted))
Me: Wait! How the hell did you do that!? Show me!
Vim Ninja: Oh, that’s a macro, you just type ‘q’ then…
You get the idea.
I didn’t get any detailed training from him, per se, but I was able to find out about some useful plugins and configuration options that got me really excited to use it again.
So, I spent a handful of hours learning how to configure Vim and installed those plugins that the Vim Ninja recommended. After I finished, I told myself I was going to force myself to use Vim and only Vim for at least two weeks.
Then, shortly after I started, this happened:
I love this slide from SparkBox, because it is so true with lots of developer tools, but especially with Vim (and Git – perhaps the topic of another post).
If you’re comfortable where you’re at, then when you decide to try a new tool like Vim, there is an immediate, sharp hurt you will feel. The hurt is usually accompanied by questioning the usefulness of the tool and a strong desire to give up.
You have to be determined to get through that hump, but when you do, you reach places you never thought possible.
How to learn to stop worrying and love the Vim
To appreciate Vim and take full advantage of it, you have to want to learn it, and you need discipline. Force yourself to use it as your primary text editor for at least a few days or a week – longer if you can. If you don’t, you will simply hit the “Ouch!” stage, get frustrated and never go back.
Where to begin?
For people in the same boat I was, or people who are brand spankin’ new to Vim, here are a few suggestions to get started on your Vim journey.
Install the right version
As of this writing, the latest version is 7.4 and that’s the one you should install to get all the latest good stuff.
On OS X
On OS X, Vim is installed by default, but chances are it’s an older version and you’ll want to get a newer one, and update it frequently.
To do this, I would recommend installing Vim using Homebrew: brew install macvim --override-system-vim
You’ll also want to make sure that your fresh new Homebrew install of Vim is the default version you use when you type vim or macvim. The --override-system-vim flag should do most of this.
If you type which vim you should see /usr/local/bin/vim. If you hit issues, the default version of vim may be first in your $PATH and may need to be adjusted. This StackOverflow post is really helpful to get that configured correctly.
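If it isn’t, a minimal sketch of the fix is to prepend Homebrew’s bin directory in your shell profile (the /usr/local/bin path here is the Homebrew default at the time – an assumption, so adjust for your setup):

```shell
# Put Homebrew's bin directory ahead of the system directories in $PATH,
# so the Homebrew-installed vim is found before /usr/bin/vim.
export PATH="/usr/local/bin:$PATH"
# After this, `which vim` should report /usr/local/bin/vim once the
# Homebrew Vim is installed.
```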
For Windows, the Vim installer will give you options to install a gui and console version of Vim to use.
Learn the basics
This post isn’t about how to do this, but there are tons of resources out there:
Or, if you don’t mind paying a bit, check out Upcase
Go whole hog
Don’t learn a bit of Vim, dip your toe in, then shift back to Sublime Text (without vintage mode on).
Make Vim your default editor, and open up all your text files in it. Resist any urge to open up anything else to type into, unless there is no other option! Do this for at least two or three weeks.
This is a bit like trying to get into an exercise regimen. It’s painful at first, but then gets easier and easier. Finally, you get to a point where you feel great about it and it starts to become more effortless.
Find a time when you’re not under tight deadlines
Under the gun to get features or fixes in at the end of a project? Not the time to be learning a new editor. Learning Vim will cause you to frequently pause, remember a key combination, and interrupt your flow temporarily.
Find a stretch of time at the beginning of a project, or second to that, in the middle of a project, where you have a bit of leeway to let your brain and your finger muscles get used to the shift in how your new editor works.
If you’re not under a lot of pressure, you’ll power through the “Ouch!” stage in a much easier fashion.
Learning how to really use visual mode effectively was my “a-ha” moment. This is why using a modal editor is great.
Learn the basics of v (characterwise), shift+v (linewise) and ctrl+v (blockwise) selections, and about yanks and puts. Once you get the hang of these basics, you will be flying and the (text) world is your oyster.
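As a quick cheat sheet (my own summary, so double-check it against Vim’s built-in help):

```vim
" Visual-mode basics – keep these as reference comments if you like:
"   v       start characterwise visual mode
"   V       start linewise visual mode (shift+v)
"   <C-v>   start blockwise visual mode (ctrl+v)
"   y       yank (copy) the current selection
"   d       delete (cut) the current selection
"   p       put (paste) the last yank or delete after the cursor
```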
Learn about the basics of a .vimrc file
A .vimrc file in your home directory is how you will configure Vim. And you’ll probably want to set some custom configurations right off the bat.
A great start is showing line numbers. (Why doesn’t Vim show them by default!?)
Just create a file called .vimrc in your home directory which contains this:
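A minimal sketch of that file – just the one option, though yours will grow quickly:

```vim
" ~/.vimrc – show line numbers in every buffer
set number
```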
Then start Vim up. Voila – you’ll have line numbers all the time.
Then, go from there. There are tons of great resources out there on how to configure Vim. I would recommend finding some examples of .vimrc files and learning about the configurations you see in them. Everyone and their brother is posting their .vimrc and other dotfiles on GitHub.
My favorite resource is ThoughtBot’s .vimrc file. They have a good default set of options, and the comments do a pretty good job of explaining what certain things are.
Install Some Good Plugins
There are some Vim purists who think you should learn Vim without using plugins. I disagree. It is useful to know “pure” Vim in case you’re out on servers and don’t have any plugins. But doing any significant development in Vim without a few decent plugins is like using a rotary phone – it’ll get the job done, and it’s kinda hipster-retro-cool, but it ain’t fun.
First off, go get Vundle. Vundle is a Vim plugin manager that will make installing new plugins and updating them a breeze.
Once you get it set up, you can just add lines like this to your .vimrc to add new plugins:
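For example (the plugin names here are just illustrations – substitute whichever plugins you want):

```vim
" Inside the vundle#begin()/vundle#end() block in ~/.vimrc:
Plugin 'gmarik/Vundle.vim'      " let Vundle manage itself
Plugin 'kien/ctrlp.vim'         " fuzzy file finding
Plugin 'airblade/vim-gitgutter' " git diff markers in the gutter
```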
In this config, the text you put inside the quotes is a Git repo for a Vim plugin. Then you can run :PluginInstall to install them. You’re done.
A list of plugins that will come in handy regardless of your language
CtrlP allows you to type ctrl+p and then a filename. It’ll do some fuzzy searching on your filesystem to find files that match. You can then navigate up and down the search results list and open a file in your buffer.
This is really similar to Eclipse’s ctrl+shift+r or IntelliJ’s ctrl+shift+n functionality.
Again, for you Git users: vim-gitgutter is great to visually see added/deleted/changed lines on the left-hand side of your buffer.
Java coders, things are lacking…
Simply put: I could not find anything that could really compare with the tight integration an IDE provides for a static language like Java.
There is real power in being able to navigate to static types and methods with a simple keyboard command or ctrl+mouse click. It’s also crucial for me to be able to do type-safe refactorings. Both of these things are built right into these IDEs and are bound to keyboard shortcuts right out of the gate.
That’s not to say there aren’t options for Vim: I tried out Eclim, which requires a headless version of Eclipse, but it was wonky to set up and didn’t run very smoothly. It just didn’t feel worth it.
So, if you have to code in Java and only Java, you might have a tougher time really committing to Vim using the suggestions in this post.
If you learn a combination of IDE shortcuts for navigating around files and combine those with Vim’s keybindings and visual mode, you’re more than halfway there. That’s what I do, and I find it really effective.
If you’re new to Vim and an IDE is your only option, just really force yourself not to turn the Vim emulation plugin off if you get frustrated! :)
Some other thoughts
If I had unlimited time, I would really love to create an open source project which packages up these essential Vim plugins and installs them with some default configurations. I spent a pretty good amount of time getting configurations together so Vim felt usable enough for me. It would help others get over the hump a lot quicker if there were pre-configured “packages” that could be installed. Maybe something like this already exists and I don’t know about it?
I’ve heard a lot of buzz recently about the (relatively) new language Go (aka Golang) created by Google. Notably, that the fantastic Docker was written in it.
I decided to try learning it recently, so thought I’d share my experience getting set up with a development environment on OS X. I’ve been doing a lot of Node.js development recently, and I come from a Java background, so it was a bit of a shift from what I’m used to when setting up a development environment.
My intention with this post is not to teach Go fundamentals (see later on in the post for some resources), but rather to help you get your environment setup so you can start writing and running Go code quickly on your Mac.
Finally, at the end, if you’re still reading, I’ll share some of my thoughts about how my experience went and about the language itself.
Getting started using Homebrew
First off, I use Homebrew to install all my developer tools, so I pursued that route to get myself started, as opposed to installing from source. I’ll document those steps below and note where they’re Homebrew-specific; however, you can also install from source.
First, install Mercurial (using Homebrew)
Go needs this to install lots of common packages.
brew install hg
Second, install Go itself (using Homebrew)
brew install go --cross-compile-common
You’ll get some information about exporting some directories to your $PATH variable. More on that below.
The --cross-compile-common option will come in handy later if and when you want to try creating executables for OS’es other than OS X. This option will increase your install size, though. So, if you don’t plan on trying cross compilation, don’t worry about adding that flag.
Third, set the right environment variables
Go has a much different directory setup than I’m used to with something like Node.js, where you can simply put a project wherever you want on your filesystem and run a command.
Go expects you to define where your main workspace is, and it will then install packages and compile binaries relative to that path. It does this by using an environment variable called $GOPATH.
Additionally, if you don’t install in the “standard” Go install location, you’ll need to tell Go where to find its binaries. You can do this by setting the $GOROOT variable. If you installed with Homebrew, see the output it gives you during the install – the install location will be under Homebrew’s Cellar (brew info go will show you exactly where).
Finally, you’ll want to make sure anything you compile you can execute easily on your path. When you compile, Go will by default put your binary in the root of your Go workspace in the bin/ directory. You can use the $GOPATH variable to reference it.
Also, because the Homebrew output suggested it, I added the $GOROOT/bin location to my $PATH variable for good measure:
(Note that that last part may not be necessary given that Homebrew does a lot of this for you. I was a bit confused by the message Homebrew output during the install. If anybody can clarify, please leave a comment and I will update this post.)
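Putting that together, here’s a sketch of what went into my shell profile. The workspace location (~/go) and the Homebrew install path are assumptions, so adjust them for your machine:

```shell
# Go workspace – packages and compiled binaries live relative to this
export GOPATH="$HOME/go"
# Where Homebrew put Go (an assumption – check `brew info go` on your box)
export GOROOT="/usr/local/opt/go/libexec"
# Make compiled binaries (like gotour) runnable from anywhere
export PATH="$PATH:$GOPATH/bin:$GOROOT/bin"
```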
Fourth, try installing the Go Tour as a test
It should be easy:
go get code.google.com/p/go-tour/gotour
This retrieves the source code into your $GOPATH/src directory under the right namespace directory.
Then, once that’s done: go install code.google.com/p/go-tour/gotour
This will actually compile the code for you in the $GOPATH/bin directory.
Hopefully, at that point, you’ve got no errors. If so, you should be able to simply run the gotour command at the command prompt, and it will spin up a webserver which runs the official Go tour on your local developer machine.
Fifth, go forth and learn!
Finally, grab a coffee, and try out some Go code!
I wouldn’t actually recommend learning Go from the “official tour” (see my thoughts at the end of this post). However, it is a good test to make sure your install is running well, which is why I note it above.
Also: here’s a quick and easy command-line trick to help speed up your learn/code/test process: use logical operators to compile and then run your program in a one-liner:
go install code.google.com/p/go-tour/gotour && gotour
This will compile and execute your program assuming that the compilation is successful. This is just using the Go tour as an example. Rinse and repeat as needed for your own package and executable while you learn.
Sixth (optional): Try cross-compiling
One of the things I wanted to try out while I was learning was cross-compiling. I wanted to see how easy it was to code on my Mac and then build an executable for some of the Linux (RHEL) servers we have at my organization.
If you used the --cross-compile-common flag in the Homebrew install above, you should have many of the common OS and Architecture combinations that you can compile to.
You can cross compile by running a command like this:
GOOS=linux GOARCH=386 go install code.google.com/p/go-tour/gotour
This will compile the Go tour executable for Linux on a 386 architecture. It will put the binary in $GOPATH/bin/linux_386 (that is, $GOPATH/bin/$GOOS_$GOARCH), whereas your normal compile without flags simply goes into $GOPATH/bin.
(A valid combination may not work unless it is one of the “common” ones that the Homebrew recipe installs for you – you can see which ones are installed in the recipe itself. You can also run the Homebrew install with --cross-compile-all and it will install all options for you, but it will take up a lot of space. This Coderwall article was very helpful in figuring this all out.)
Finally, if you’re interested, here are my thoughts on Go itself and my experience learning it.
Go has a built-in package management system from the get-Go (ha, get it!?)
You can import packages directly from a Git repository (awesome!)
It’s pretty easy to cross compile, which is nice to create simple executables for different environments
Given that the official tour is the first place people will likely start to learn the language, I thought it did not explain concepts very well, leaving the reader to glean them from short code examples that took too long for me to grasp, given how short they were. Additionally, leaving math exercises to the user is not engaging or effective for me, and I’m assuming the same goes for many developers interested in using Go.
Making concurrent function calls is almost too easy
I was trying to make some HTTP GET requests with goroutines and very quickly learned that I would overwhelm the machine running it and it would fail with a “too many open files” error. Trying to throttle those concurrent requests to a manageable amount was not nearly as easy as creating all of them. The pattern I came across using semaphores seemed a bit convoluted for a language that’s supposed to make concurrency “easy.”
Still need native drivers to connect to Oracle
This is a biggie in the large organization where I work. We use Oracle heavily, and it would be great to have executables that could connect to Oracle without native Oracle drivers installed separately. Alas, this is nothing new – in the Node.js and Ruby worlds, native drivers are the standard as well. It’s not the end of the world, but using Oracle in Java is still a heck of a lot easier.