Wednesday, July 30, 2014

Thinking With Data by Max Shron, O'Reilly

I got this book expecting something to help me understand data science and maybe be able to converse intelligently with real data scientists at cocktail parties. Instead, it really just focuses on analytical thinking, things that most data scientists would be bored with. While I agree with other reviewers that it's well written and concise, it didn't tell me anything I didn't already know from monkeying with my own data sets in Excel. Overall, it's too simple for data scientists and not detailed enough for laymen.

Thursday, December 19, 2013

Building Polyfills by Brandon Satrom; O'Reilly Media

Building Polyfills is a fun book to read. It’s about how to contribute to the polyfill community and build something that can make life easier for other JavaScript developers. Brandon Satrom explains the purpose and history of polyfills and then walks the reader through building a polyfill for HTML5 Forms using Kendo UI. What makes this book interesting is its scope. It covers the creation of a polyfill from picking a suitable subject to publishing it through Modernizr and GitHub. It covers how to decipher the W3C documentation, the basics of git, the merits of various popular JavaScript frameworks, and even experimental polyfills that can enhance a developer’s toolkit. The code throughout is both clear and relevant, and in the process of learning how to build a polyfill, the reader also gets a solid introduction to Kendo UI.
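
To give a flavor of what the book walks through: at its heart, every polyfill is a feature test followed by a script that fills the gap. A minimal sketch of that detect-and-patch pattern, using the HTML5 placeholder attribute as an example (my own illustration, not code from the book), looks something like this:

    // Feature-detect: does this browser already support the placeholder attribute?
    if (!('placeholder' in document.createElement('input'))) {
      // It doesn't, so emulate the behavior with script.
      var inputs = document.querySelectorAll('input[placeholder]');
      Array.prototype.forEach.call(inputs, function (input) {
        var hint = input.getAttribute('placeholder');
        if (input.value === '') { input.value = hint; }
        input.addEventListener('focus', function () {
          if (input.value === hint) { input.value = ''; }
        });
        input.addEventListener('blur', function () {
          if (input.value === '') { input.value = hint; }
        });
      });
    }

The HTML5 Forms polyfill the book actually builds with Kendo UI is far more involved, but it starts from that same basic idea.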

All of this information is great for JavaScript developers whether or not they’re interested in building their own polyfills. The only drawback to this book may be that it has an expiration date. Eventually browsers will even themselves out and polyfills will no longer be necessary. In the meantime, this is a good read for JavaScript developers.

Tuesday, October 15, 2013

Visual Models for Software Requirements by Joy Beatty, Anthony Chen; O'Reilly Media

Visual Models for Software Requirements looks at the process of defining requirements from a creative perspective. The premise of the book is that business requirements are not two dimensional like the typical list of "shall" statements. They're layered and multi-faceted, mirroring the complexity of the business they describe. This book lays out a series of diagramming techniques to help discover all of the intricacies of system flows, business data, and process flows, from the level of abstract business goals down to individual UI elements.

For someone who is visually minded, these diagrams can be used to plan a system from "soup to nuts." The approach reminded me of design thinking, usability design, and system architecture combined. There was very little in it that I found new, but the way the models combine to present a holistic picture is fantastic. It is also an easy book to read or reference. Each type of diagram has its own chapter with templates, examples, and pitfalls to watch out for. That makes it easy to pick and choose which models best fit your solution and find all the information you need in a concise manner.

Overall, this is a great book to have on the shelf.

Monday, April 22, 2013

Lean UX by Jeff Gothelf; O'Reilly Media

Lean UX is an attempt to find the holy grail of agile user-centered design. Jeff Gothelf applies principles of user-centered (a.k.a. UX) design to the agile software development lifecycle. The philosophy of constant user feedback is at the core of both UX and agile, but their rhythms don't necessarily harmonize. It is not easy to find time in a fast-paced agile lifecycle to apply UX.

Jeff Gothelf manages to dovetail pure agile methodology with UX techniques like prototyping and user testing. Lean UX has a lot of great sections on the concepts of UX and agile, and it also grounds them with plenty of concrete examples taken from the author's own experiences.

Overall, Lean UX is an easy read and a good guide for meshing rapid development with a sustained focus on the end user. It's a well written book that is actually useful for project managers, systems engineers, and designers. There is nothing particularly new in this book, just old principles integrated with one another in new ways. Sometimes, that's all it takes to be truly innovative.

Wednesday, March 20, 2013

Resilience and Reliability on AWS by Jurg van Vliet, Flavia Paganelli, Jasper Geurtsen; O'Reilly Media

In an effort to pick up some more cloud knowledge, I recently volunteered to review Resilience and Reliability on AWS as part of the O'Reilly Blogger Review Program.
This book is the sequel to Programming Amazon EC2 by the same authors. It dives a little deeper into the infrastructure decisions that must be made when architecting an Amazon Web Services (AWS) application for maximum performance under all types of loads and catastrophic failures. The first couple of chapters are an introduction to AWS and a list of top ten "survival" tips for building AWS applications for resilience and reliability. Subsequent chapters explore how to integrate specific open source tools such as Postgres and MongoDB with AWS to maximize scalability, redundancy, and load balancing, the principles that drive the Infrastructure as a Service (IaaS) model.
The topic is an interesting and timely one. There are lots of books out now on cloud technologies, but this one has a narrow focus that could be of interest to someone building a large scale AWS application with high availability and scalability requirements. However, the authors didn't focus on that audience; instead they aimed a narrow topic at a broad audience. The result is schizophrenic.
The beginning chapters are almost too simplistic for an audience interested in an advanced topic such as this, and the subsequent chapters are too cryptic for those without extensive prior knowledge of the subject. For example, the chapter on top ten "survival" tips includes such gems as "embrace change" and "everything will break." Those tips aren't unique to AWS at all. Any developer who's been around the block enough to be interested in developing large scale applications using AWS should have plenty of scars from learning those lessons on more traditional development projects. On the other hand, the chapters that are solution specific assume too much knowledge of the subject. There are pages of Python code for a number of neat integration tricks, but very little explanation of how the code works. If a developer is familiar enough with the technology to read and understand a Python script for complex AWS integration tasks without any accompanying documentation, they don't need to be reading this book. The entire book is further muddled by an awkward writing style. In some places it's written in first person singular and in others, first person plural. Transitions and tone are all over the map, making a complex subject even harder to follow.
Despite its flaws, this book does fill a niche. The information it contains could help a systems architect make better informed architecture decisions. For that reader, I would recommend this book with the caveat that you should skip the first few chapters and be prepared to read each subsequent chapter twice to understand it.

Saturday, February 23, 2013

What Good Are Demos?

I recently had an epiphany: software demos are rarely worth the effort. There are few instances where the payoff outweighs the resources spent in preparation and the risk of a spectacular public failure that demos entail.

Over my career I've participated in countless demos for stakeholders big and small. I've conducted demos for the CEO of a large health care company. I've demo'd software to military officers and college professors. I even helped demo whitehouse.gov to the President of the United States. I've also demo'd software to librarians, call center workers, and lots of other front-line workers. All of these demos involved a rush to produce something worthy of demonstrating, rehearsal, and a lot of prayer. Some resulted in "attaboys," some resulted in complacent acceptance, and some resulted in complete system re-designs. No one has ever been so blown away by what I've shown that they immediately increased the project budget or gave me a hefty raise. Maybe that's the quality of my demos, but I doubt it.

So why do development projects subject themselves to extra work and stress to conduct demos? Here are a few reasons I've identified:

Collect Interface Feedback - It's a bad idea to use a demo to get user feedback on the user interface. First, demos are staged and don't reflect anything in the real world. Often in a demo, the demonstrator is the developer who created the software and knows its every nuance. They know exactly what buttons to push and where to go to find what they need. The result is a polished demo that makes the software look easy to use. The demonstrator first shows the user how to use the software and then asks them if it looks easy to use. Of course it looks easy to use; they were just shown how to use it. That says nothing about whether someone who has never seen the software before will understand how to use it.

Collect Aesthetic Feedback - Demos should not be used to solicit feedback on the look and feel of an application. Often, users will comment on the color scheme, layout, or graphic selection of the application. Those types of issues should be consistent with the brand and marketing of the application, not the personal tastes of users. The best way to test the aesthetics of a design is to conduct a branding test: decide what color scheme you want to use and then ask a sampling of users what values those colors evoke in them. For example, if the application handles sensitive information, it should convey trust; pink and purple would probably not be the best color choices. You can do the same type of test for logos and graphics.

The other reason not to listen to feedback from users on aesthetics is that they aren't designers. When considering someone's opinion on a graphic design, the first thing I do is take note of what they're wearing. If they're telling me my colors aren't appropriate and they have on a Donald Duck novelty tie, their input doesn't count for much. Design is much more than having an eye for color and taste. A good design is well thought out and follows some basic tenets of information presentation, such as the use of white space, font choice, and layout. These are principles gained from experience, and they should not be subject to review by someone's inner artist.

Demonstrate Progress - This is about the only good reason to do a demo. Stakeholders need to see that progress is being made, and there is no better way to demonstrate that than by showing off the product in action. This is a perfect scenario for your magic show. A good demo that wows stakeholders buys goodwill for the project and creates a positive buzz for the software. The problem is that you need a finished product to create that wow factor, and if the product is finished, it had better be what the stakeholders expect. It's like the end of a barbershop haircut. The barber always holds up a mirror to the back of your head for you to approve of the job he did. If you act shocked at what you see, that's very bad for the barber; there is no way to put that hair back. The most he wants to hear is a request for a little extra trim. Similarly, there should be many touch points with stakeholders during the development process so that demos do not result in any surprises. It's important to have a clear understanding of expectations (requirements) up front with sanity checks along the way. If this is done, demos merely reassure stakeholders that you're making progress.

Overall, demos are not a good way to collect any kind of information from stakeholders. The only thing they are good for is demonstrating progress. It's natural for a development team to want to show off the product of their work, but expectations should be set appropriately.

Saturday, August 25, 2012

Process: Is it Really a Dirty Word?

A common refrain I hear all the time in my role as a consultant is, "we don't have time for that." The "that" is usually process improvement, requirements gathering and review, or some form of testing. In other words, planning. Usually the people who tell me they don't have time have some basis for the statement. In many cases they support a client whose requirements change constantly. If they put too much time into planning, the requirements they're planning for would change before they finished.

I get that. I was a developer in the trenches for 12 years. Most of that I spent in pretty loose environments where requirements gathering consisted of someone in leadership giving me an elevator speech about an idea they had. It was my job to turn that idea into reality. If I was lucky, I got to talk to a real user and see how they might actually use it.

Nowhere I've ever worked was as averse to process as the White House. I worked as a lead developer on whitehouse.gov from 2001 to 2005. Each day was a new priority, a new crisis, a new fire. We rarely acted, only reacted. Sometimes we had to create small applications with a few days to go from a good idea to deployment. Sometimes we had only a few hours to accomplish the same.

Typically, our team of six developers and designers would be told one day that we needed to develop a site to highlight a hot issue or upcoming trip. We would have a few days to deploy it. In addition to design work, each project almost always necessitated code modifications to our content management system to accommodate a custom component. Our documentation consisted of an email from the White House Internet Director outlining the project. Testing was a quick run-through of the site by the developer just prior to moving it to production.

We were very responsive and accomplished some pretty notable things pretty quickly, but we also got bitten by the lack of process on many occasions. When we did, our mistakes usually ended up in the Washington Post. Here are a few examples:

May 2000 - This incident predates my time at the White House, but it was a legendary reminder to us of the consequences of silly mistakes. President Clinton planned a tour to promote an education initiative, and to support the trip, whitehouse.gov included an interactive US map highlighting each stop. The developer finished the map late one day; it was quickly reviewed by the White House Communications Office and went live early the next day. One of President Clinton's stops was Owensboro, Kentucky, which the map incorrectly placed in Tennessee. The Associated Press picked up the mistake and had a field day with the irony of an education site that was weak on geography. The incident even offended some Kentuckians.

January 2001 - Again, this one predates my time at the White House, but not by much. Since President Clinton was the first President to have a web site, President Bush was the first President who had to build a new one from the ground up. The contentious 2000 election and the ensuing delay in deciding a winner meant that the White House web team had a matter of weeks to build a new whitehouse.gov site, reusing boilerplate content such as Presidential biographies and tour information while at the same time building a platform for communicating the President's agenda to the American public. The solution they decided on was a quick and dirty site to go up on inauguration day, to be replaced several months later by a more polished site to last through the administration.
What they accomplished was absolutely remarkable . . . with one small glitch. When the site launched in a behind-the-scenes frenzy on 20 January 2001, someone forgot to remove placeholder text on the front page that said "insert something meaningful here." Of course Wired.com picked it up, and instead of reporting on the remarkable transition of whitehouse.gov, the headline read, "Anybody Home at Whitehouse.gov?"

July 2003 - This next one I could write a book on. I had a front row seat for this project, but luckily managed to stay out of it. The White House receives thousands and thousands of emails per day, far too many for anyone to sort through, and at the time there was no automated system for handling them (other than scanning for threats). The White House Communications Director decided to rectify this by building a system to respond to each email with a form letter addressing the sender's concerns. However, building a system intelligent enough to discern those concerns from free-form email seemed a difficult task. I honestly don't remember the details of why it seemed so difficult, but I think budget and accuracy were concerns.

The solution as designed required the sender to fill out a comment-card-like form describing the content of their message with a series of check boxes, drop-downs, and radio buttons. It also included a text box at the end that allowed them to make an open-ended comment. The idea behind the form was to capture discrete fields so that White House staffers could more easily mine the data. After filling out the form, the user would be emailed a form letter in PDF format based on their selections.

On paper, this makes total sense. The White House gets better information about how the American people feel from a database of discrete data points than from thousands of letters that it can't possibly read through. However, the human element got completely ignored. Even though most people instinctively know that the White House receives way too much email to read each message, everyone who takes the time to email wants to believe that someone on the other end is thoughtfully reading their comments and considering them before responding. This system completely removed that illusion. Instead of feeling as though they were communicating a thoughtful point of view to their government, they felt like they were taking care of business at the DMV. People hated it.

If anyone at any point in the design process had thought to test this with actual users, they would have immediately seen the problem. The concept was so flawed that a simple paper prototype test would have sent up huge red flags, sending the whole project back into redesign and saving the government money and the Bush administration embarrassment.


I realize as I close this blog entry that I have not implicated myself in any of these examples of process gone wrong. That's not my intention. Rest assured, I had more than my share of gaffes and errors that made it to whitehouse.gov due to hurried testing or poor planning. I've been screamed at by some cousin of a Rich Texan more times than I care to remember. However, nothing I ever did at the White House is as painful for me to admit as my involvement with Barney-cam.