The Cloud – Elasticity

This article is part of a series on the defining value propositions of cloud computing platforms.

The Stretchy Cloud

Another core tenet of cloud computing is its inherent elasticity: the ability to consume more or fewer resources on demand, paying only for what you use.

Elasticity is a critical enough differentiator for cloud computing that Amazon even named their IaaS offering “EC2”, for “Elastic Compute Cloud”.

Autoscaling

There are two ways to take advantage of elastic computing resources.

  1. Exploit the ability to rapidly provision new resources whenever you notice demand getting high, expect an increase in capacity needs, or have a one-off task that would benefit from increased horsepower.
  2. Get a computer to do #1 for you based on predetermined thresholds, for example CPU load rising above 80% on your current servers.

Amazon Web Services, Rackspace, Microsoft Azure and others all offer #2, known as autoscaling.
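
To make #2 concrete, here’s a minimal sketch of what an autoscaling setup looks like on AWS, using the Ruby aws-sdk gem. The region, group name, and thresholds are illustrative placeholders, not recommendations:

require 'aws-sdk'  # assumes credentials are configured in your environment

autoscaling = Aws::AutoScaling::Client.new(region: 'us-east-1')
cloudwatch  = Aws::CloudWatch::Client.new(region: 'us-east-1')

# A simple policy: add two instances to the group when triggered
policy = autoscaling.put_scaling_policy(
  auto_scaling_group_name: 'web-frontend',      # hypothetical group name
  policy_name:             'scale-up-on-cpu',
  adjustment_type:         'ChangeInCapacity',
  scaling_adjustment:      2
)

# The trigger: average CPU above 80% for two consecutive 5-minute periods
cloudwatch.put_metric_alarm(
  alarm_name:          'web-frontend-high-cpu',
  namespace:           'AWS/EC2',
  metric_name:         'CPUUtilization',
  statistic:           'Average',
  period:              300,
  evaluation_periods:  2,
  threshold:           80.0,
  comparison_operator: 'GreaterThanThreshold',
  alarm_actions:       [policy.policy_arn]
)

Scaling back down works the same way, with a second policy and alarm pointed in the other direction.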

I’ve found the real usefulness of autoscaling to be limited, and know very few companies that do it in practice. A now ancient (2008!) article by George Reese sums up some of the arguments against autoscaling, which still ring true.

If you have very specific application services with predictable load patterns, or if you run so many servers that it’s worth a few engineers’ worth of salary to manage the complexities and assumptions of an autoscaling cluster, go for it. Until then, stick to scaling your infrastructure on the fly, but manually, to take advantage of your elastic infrastructure.

An Underutilized Feature

The elastic nature of the cloud may be its single most under-utilized feature. Most companies move to the cloud to save on infrastructure costs or to take advantage of managed services. And many use the elastic nature of the cloud to easily provision servers. But disappointingly few automatically scale their resources up and down to meet demand, something I suspect will change as the tooling for cloud platforms matures.

The best dollar value for cloud computing comes when you can exploit the elasticity to only pay for resources you need, when you need them. You win because you aren’t paying for what you don’t need, and IaaS providers win because they can rent the same resources to someone else during the hours you aren’t using them.

Case Study: NYTimes

Despite being ignored by many tech companies, who just want the cloud because it’s a low-cost way to deploy an application, there are still hundreds (maybe thousands?) of companies that are making liberal use of the elasticity of Amazon and other big cloud providers.

The New York Times technology team has long been at the cutting edge of both front-end and architectural strategies. When it came time for them to put all their 11 million public-domain articles online as PDFs, they took advantage of EC2 by spinning up 100 machines for 24 hours to crunch all the data. Let’s say they used m1.large instance types (just guessing), at a cost of $0.24/hour. That’s 100 instances × 24 hours × $0.24, or only $576 to run a whole day. A veritable supercomputer for the cost of a low-end iPad!

I went to a talk a number of years back where the New York Times discussed the process of updating data on election night, as poll results get uploaded to the Associated Press FTP server for consumption by all the media outlets. New poll results flow in once per hour, and time is of the essence when reporting breaking election results to your readers. So the NYT would spin up lots of cloud instances on election night to parallelize the process of slurping down election results every hour and updating them on their site. This was part of a much larger strategy for serving high traffic on election night, all made possible by the elasticity of the cloud.

Case Study: You

Take a look at how your business uses infrastructure, and ask two questions.

  1. How variable is my utilization? How much money could I save if I turned machines on and off in sync with demand?
  2. What cool stuff could I do if cost was no barrier to having hundreds of machines at my disposal for quick bursts? Is there a way to leapfrog my competitors by throwing computing horsepower at the problem?

Keep in mind that it’ll take a bunch of work for your ops team to figure out how to best exploit elasticity on your particular deployment. Make sure it’s a big enough lever for you before you send them on a wild goose chase. But know that the capability is there, and is one of the great benefits of the Cloud.

Building a Simple Olympic Medals API

I’m shamelessly excited about the upcoming Olympic Games. I’m a sucker for both the competition and the cheesy human-interest stories. I thought the games would make a good excuse to show how a simple API can be built and launched from scratch with modern tools.

Put on your propeller beanie and let’s take a gentle geeky look at how I built it.

Olympics Medals API

The project was to launch a web-based API that returns JSON data on the current medal count for the Sochi 2014 games. In plain English, that means a URL that returns raw data that can easily be consumed by another computer program.

Why would we want this? This is very similar to almost every API that powers mobile apps today. Most iPhone and Android applications are constantly visiting URLs like this to get the data they need to update views in response to user input, loading a new screen, etc. These things power nearly every interaction you do on mobile, and a good chunk of the web too.

In our specific case, we return JSON text as seen below, with the latest medal counts for all the Olympic countries. You’ll get the full data if you click the link above.[1]

[
    {
      "country_id": "united-states",
      "country_name": "United States",
      "rank": 1,
      "gold_count": 12,
      "silver_count": 14,
      "bronze_count": 6,
      "medal_count": 32
    },
    {
      "country_id": "germany",
      "country_name": "Germany",
      "rank": 2,
      "gold_count": 8,
      "silver_count": 16,
      "bronze_count": 1,
      "medal_count": 25
    },
    and so on, for all 94 countries represented.
]  

There’s also another URL for retrieving the medal counts for a particular country:

That one returns just a little bit of text:

{
    "country_name": "United States",
    "rank": 1,
    "gold_count": 12,
    "silver_count": 14,
    "bronze_count": 6,
    "medal_count": 32
}  

Getting the Data

This app was a fun reason to try out a newly launched tool called Kimono. They offer a service which scrapes structured data off web pages for you. I created a Kimono scraper in only a few clicks which retrieves the raw data directly from Sochi2014.com. Wouldn’t have been hard to do myself, but developers love shortcuts wherever we can find them.

It’s worth noting here that my API is a wrapper for a Kimono API, which is scraping the official Sochi website, which is displaying raw data from the International Olympic Committee medal standings API. These kinds of services-built-on-services are what make the modern web so exciting and powerful, while simultaneously confusing and often fragile. If I were building a real production-quality API for Olympic medal standings, I’d almost certainly try to license the raw data source to make my app faster and more reliable. But this approach will work for our purposes, and allowed me to get the whole API built and deployed in only a couple of hours.

Building the App

I chose the lightweight Ruby Padrino framework for this app. It doesn’t have as many advanced features or as much support as something like Ruby on Rails, but it’s fast and easy to work with for a small, focused project that doesn’t need a fancy front-end or even a database (though you can do all that with Padrino too).

You can find all the source code for this application open-sourced on GitHub. If you haven’t poked around at an app like this before, indulge yourself, and go take a look at just three files:

  1. The main application file shows three simple URLs: our two API endpoints, and the root, which redirects to our documentation.
  2. The MedalData class which does the work of grabbing the raw data and arranging it to match what we return via JSON.
  3. A simple automated test for MedalData that makes sure future changes to my code or the Kimono scraper don’t break the behavior I’m expecting. This is a great example of how simple an automated test can be.

All the rest of the files in the project are just decoration: configuration, documentation, and the boilerplate plumbing that Ruby and Padrino require to do the work. Not that hard, right?
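
If you’d rather not click through, here’s a minimal sketch of what that main application file boils down to. The route paths, docs URL, and method names on MedalData are illustrative stand-ins; the repo has the real thing:

require 'padrino'
require 'json'

class OlympicsApi < Padrino::Application
  # The root redirects to the human-readable documentation
  get '/' do
    redirect 'http://docs.olympicmedals.apiary.io/'  # hypothetical docs URL
  end

  # Full medal standings for every country
  get '/medals' do
    content_type :json
    MedalData.new.all_countries.to_json    # MedalData wraps the Kimono API
  end

  # Medal counts for one country, e.g. /medals/united-states
  get '/medals/:country_id' do
    content_type :json
    MedalData.new.country(params[:country_id]).to_json
  end
end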

Documenting the API

Developer tool Apiary maintains an open standard for documenting APIs like this one, called the API Blueprint.

I wrote up a description similar to the one above, but in their specified format; it’s shown when a user visits http://olympics.clearlytech.com/.
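
To give you a flavor, an API Blueprint file is just structured Markdown. A fragment describing the single-country endpoint might look something like this (paraphrased, not the exact file):

FORMAT: 1A

# Olympic Medals API

## Country Medals [/medals/{country_id}]

### Retrieve Medal Counts [GET]

+ Response 200 (application/json)

        {
          "country_name": "United States",
          "rank": 1,
          "gold_count": 12,
          "silver_count": 14,
          "bronze_count": 6,
          "medal_count": 32
        }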

Simple documentation like this goes a really long way towards convincing others to consume your API. Developers love this stuff.

Deploying It To The World

I decided to launch it on the mind-bogglingly easy Heroku platform. I created a new app, ran some git commands (Heroku manages your code by using the git source control tool that your developers are probably using anyway), and voilà! Instant public application.

Technically, the Heroku app runs at http://olympics-api.herokuapp.com/, but I told it to answer to http://olympics.clearlytech.com/ as well, by putting an entry in my DNS zone, managed by Amazon Route53. This may seem like a lot of moving parts, but wiring this kind of thing up is second-nature stuff to any full-stack developer worth her salt.
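
The DNS entry itself is a one-liner. Conceptually, it’s just a CNAME record pointing my subdomain at Heroku (the TTL and exact formatting will vary with your DNS provider):

olympics.clearlytech.com.  300  IN  CNAME  olympics-api.herokuapp.com.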

The whole process of setting this up on Heroku (including signing up for the service, setting up the app, deploying it, and changing my DNS) took about 10 minutes. There isn’t a faster way right now to deploy a low-volume application for public consumption.


  1. The code at the raw URL is not nicely formatted like our example, but another piece of code consuming this service doesn’t care how pretty it looks.  ↩

The Cloud – Infrastructure As A Service

This article is part of a series on the defining value propositions of cloud computing platforms.

Infrastructure As A Service

The main value proposition usually associated with the cloud is that it replaces the need for purchasing, racking, and managing your own computers, storage, and networking hardware in a datacenter. This is far from the only benefit of the cloud, but it’s certainly a major one. Amazon CTO Werner Vogels describes it nicely:

The cloud lets its users focus on delivering differentiating business value instead of wasting valuable resources on the undifferentiated heavy lifting that makes up most of IT infrastructure.

The “undifferentiated heavy lifting” that Vogels describes includes things like: procurement of hardware; building/renting space, network, and cooling in specialized datacenters; physical machine security; hardware monitoring; network administration; and all the associated capital costs and bookkeeping.

The general model for IaaS is that you can click a few buttons in a web interface (or make a few API calls, if that’s how you roll), and a new computer boots up somewhere, just for you: an instant toy/tool that acts just like a machine you unboxed and racked yourself, except it’s already powered, networked, and cooled, with an OS ready to go.
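
If you prefer the API calls, booting a machine really is that small an operation. Here’s a minimal sketch using the Ruby aws-sdk gem; the AMI ID is a placeholder:

require 'aws-sdk'

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# Ask EC2 for one new virtual machine; it boots and is reachable in minutes
response = ec2.run_instances(
  image_id:      'ami-12345678',   # placeholder machine image
  instance_type: 'm1.large',
  min_count:     1,
  max_count:     1
)

puts response.instances.first.instance_id   # your shiny new server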

Let’s savor the appeal of having powerful thousand-dollar gadgets just appearing with the click of a button. Seriously, just savor that. It’s like geek Christmas, every day. Man, do I love the cloud.

The Dirty Truth of The Cloud

IaaS providers do their best to hide from you the dirty truth of the cloud: cloud machines are almost always virtual machines, not real ones. When you launch a new machine, it’s the server-grade equivalent of running Windows on your Mac with VirtualBox or Parallels. Major cloud providers use what’s called a hypervisor, usually Xen or VMware ESX, to run many logical (guest) servers on one physical (host) machine.

Underlying this simple idea is a lot of complexity. Sometimes the abstraction of “just like a regular machine” leaks out in how you provision, manage, or monitor cloud machines. But in the past few years, as the startup world has moved rapidly to managed infrastructure, there are fewer and fewer issues with running your production system in a virtualized cloud environment.

If you want managed infrastructure but on dedicated hardware, you can either opt for something like EC2 Dedicated Instances, or move to a provider that lets you select your hardware but then manages it for you, like IBM’s SoftLayer.

IaaS is Often Cheaper, Especially At First

Cloud services providers are constantly promoting how much cheaper it is to run in the cloud. And it is. Sort of. Like most cost calculations, the truth is: it depends.

Capex becomes Opex

At first, there’s no doubt that the Cloud is cheaper. The capital expenditure for hardware and people to run it is a significant cost for any startup, especially in the modern era of lean startups (both financially and philosophically lean).

Swapping that capital expense for monthly operating costs gives a startup low up-front cash requirements, plus the operating flexibility to modify infrastructure as necessary to meet its growing or (more often than we’d like to admit) shrinking needs.

The Total ROI Is Huge

The most common mistake I see founders making when evaluating the ROI of a cloud deployment is stopping at the raw price calculations. And of course, they forget all the components involved in a robust infrastructure: they focus solely on the CPUs and datacenter cost, but skip all the networking gear, KVMs, spare parts, hard drive replacements, etc.

There’s little doubt that IaaS pricing is a win for the little guy (and the providers aren’t faring too badly either!). The providers get to buy in bulk, and rent you instance access at a reasonable markup. Given the rates a giant like Amazon gets on hardware and bandwidth, it’s quite likely that even with their markup, it still costs less than if you paid small startup prices yourself. Not to mention your likely preference for operating vs capital expense.

However, the real win comes in time savings for your technical team. I can’t stress enough how much of a distraction it is to run your own infrastructure. Servers are not a set-it-and-forget-it kind of resource! Any savings you get by penny-pinching your way onto your own hardware (maybe not much savings at all) will quickly evaporate the first time your developers are woken up in the middle of the night to replace hardware or debug network issues.

Is Cloud IaaS Right For Me?

If you have to ask the question, then the answer is quite likely yes.

Most startups these days that decide to rack their own hardware know in advance that it’s a core competence (a pillar of their business). My friends at ObjectRocket launched to build the best hosted MongoDB solution available. They are systems and performance experts, and that’s what they were selling, so owning their entire hardware stack was crucial to maintain the control they needed. And they were willing to spend the capital up-front, knowing they could pass the long-term savings on to their customers. They did such a good job building their own infrastructure play that Rackspace acquired them very early for a tidy sum.

You are paying two kinds of overhead when you deploy to the cloud:

  1. A virtualized environment comes at a performance cost. It’s very efficient in terms of CPU and RAM performance (maybe a 2–5% hit vs. dedicated hardware), but shared network, shared disk I/O and other factors mean that you won’t have screaming fast hardware at your disposal. And that’s okay. Very few companies need that.
  2. There’s a man in the middle (your IaaS provider) marking up the hardware on you. If you need lots of hardware, and can get bulk rates when buying it, and have the capital to spend, simple economic theory suggests that cutting out the man in the middle will give you the most bang for your buck.

If you absolutely have to have the most hosting performance for your dollar, and you don’t require the flexibility to scale up and down very much, you might consider building it yourself. But if you are going to build your own infrastructure, remember that you need to budget for at least one (ideally three) full-time system and network admins. In my experience, you shouldn’t even start to think about building your own infrastructure until you know you’ll need at least 100 machines.

Reaching Your Users Via Email

Every day, we all receive dozens of emails from companies and services, promoting products, reminding us of appointments, alerting us to new social media content. Despite consuming all those emails, it’s not immediately obvious how to implement them for your own business, so here’s a primer.

Transactional Email

There are two major classes of email that you’ll send. The first is described as transactional email. These are emails like

  • “Thanks for signing up”
  • “You’ve requested a password reset”
  • “Your friend posted a new photo”
  • “Shipping on your product was delayed”

and so on. These emails share a few key traits: they are

  1. highly personalized
  2. sent to a single recipient
  3. triggered by the user taking an action, or by an update to a pre-existing transaction the user has agreed to.

Usually, users can manage their email preferences[1] regarding which transactional emails they receive. But notably, these emails are exempt from most of the regulations of the CAN-SPAM Act[2]. That’s in contrast to the other class of email you send…

Promotional Email

These represent pretty much everything else you receive, including:

  • Newsletters
  • Special offers
  • Coupons
  • New Product Announcements
  • “Please come back, we miss you”

These are typically sent by the marketing department, are always sent in bulk, and usually have little-to-no personalized content, although quite often the email goes to a segment of the full mailing list (say, only people who’ve purchased a particular product, people we haven’t seen in a while, or people from the Southwest).

Five Steps To Sending

In order to actually get an email sent, there are five things that need to happen. Where and how they happen is at the heart of determining the best email tools strategy for your business.

  1. Determine the recipient list — that might be just the single user who is getting a transactional alert, or a whole segment of users about to get an announcement of a new fall product line
  2. Build the content — emails are generally sent with both a rich HTML version and a plain-text version; these need to be crafted along with all the content
  3. Personalize the content — as simple as a mail-merge, or as complicated as an algorithmically-driven set of friend recommendations; most emails will get at least a small personal touch for each recipient
  4. Queue the email — no matter how you are going to accomplish step 5, it’ll be slower and less reliable than steps 1–3, which means you’ll want to queue up mails to get sent out (you don’t want your whole user sign-up process to fail just because there was an error sending the welcome email)
  5. Send the email — Eventually, there needs to be an SMTP Server somewhere which actually sends the email to the recipients’ mail server(s) of record

For transactional emails, you will typically do steps 1–3 with your own code and systems, and maybe step 4 too. A new user will sign up, you’ll have a hook in your code during signup which captures their information, builds the custom welcome message in HTML and plaintext, submits it to a background queue, which will moments later safely deliver it to an email service provider to handle sending.
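
Here’s a minimal sketch of that flow in Ruby, using Sidekiq as the background queue. User, render_template, and EspClient are stand-ins for your own model, your templating of choice, and whichever provider’s gem you pick:

require 'sidekiq'

class WelcomeEmailJob
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)                          # step 1: one recipient
    html = render_template('welcome.html.erb', user)   # steps 2-3: build and
    text = render_template('welcome.txt.erb', user)    # personalize both versions
    # step 5: hand off to the ESP, which actually delivers via SMTP
    EspClient.deliver(to: user.email, subject: 'Welcome aboard!',
                      html_body: html, text_body: text)
  end
end

# Step 4, in your signup code: enqueue and move on, so a mail hiccup
# can never break the signup itself
WelcomeEmailJob.perform_async(new_user.id)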

For promotional emails, most of these steps will be with an email service provider, who manages your list, helps segment it, has templates for campaign content, does mail-merge-style replacements to lightly personalize the content, and takes care of queueing and sending the mail.

BUT… no matter which kind of email we’re talking…

Don’t Ever Send Your Own Mail!

These days, there is virtually no reason to be sending mail (step 5) yourself. Just don’t do it. If you are going to do it, then it had better be a pillar of your business. If it so happens that you are building an email marketing company, then you should be prepared to hire full-time staff to deal with scale, deliverability, spam-complaint handling, etc. Unless you know you need that, you don’t!

Just to belabor the point: what typically happens is that your development team is running a Linux box with your web host. That Linux box comes bundled with sendmail or qmail, and a boot service that runs the SMTP server, which by default accepts emails sent from the web server process running on the same machine. Fortunately, this is becoming less and less common, but even in 2013 I see teams that use this kind of setup in development, and it works “well enough” that it never gets revisited in the production environment. Don’t let your developers take this approach.[3]

Sending your own email without knowing deeply what you are doing is a guarantee of poor deliverability, and likely some security flaws and systems issues to boot. Use an Email Service Provider (ESP) in all cases. They are experts at getting email delivered, they have teams of people constantly looking out for your deliverability and email reputation, and as a bonus, they provide open- and click-tracking on all your emails, transactional or promotional.

Tools For the Job

So that’s a lot of background info, when you really just want to know which email provider to use. From a practical perspective, you will want two separate Email Service Providers (ESPs): one for transactional emails and one for promotional emails. A few ESPs will tell you they can handle it all, but for now, that’s just a sales ruse, for two reasons:

  • The economics of sending customized one-off emails (where the content is sent on-demand from the customer,[4] with an expectation that the email will reach the recipient’s inbox within seconds) are very different from the economics of sending marketing emails (batches of almost identical emails, sent to a mailing list known in advance by the ESP, using templates managed in their system, to be delivered anytime within a window of a few hours for the whole batch). ESPs want to be able to price these products differently.
  • The use-cases for these emails are very different. In one case, your application code triggers emails, and as described above, you basically need a mail server in the cloud; the ESP’s customer is your development team. In the other case, your marketing team wants a great dashboard for managing campaigns, segmenting the mailing list, measuring the effectiveness of coupon campaigns, etc. The only thing these use-cases share is the final step of sending an email; otherwise they are fundamentally different products you are buying.

In general, you’ll be looking at prices between $0.10 and $1.00 per thousand emails, depending on the volume you commit to and the capabilities of the provider. Big “enterprise-grade” providers can send much higher volume at higher speeds, and provide more elaborate means for building and tracking emails, but you’ll pay for it!

You can find our current recommendations for The Best Email Service Providers at ClearlyTech Recommends.


  1. On your site, or in your mobile app settings  ↩

  2. Which, unfortunately, didn’t actually curb e-mail spam in the slightest. But it sure does make it a pain in the ass to be a law-abiding startup entrepreneur just trying to send emails to happy customers.  ↩

  3. Shame on you, developers!  ↩

  4. that’s you!  ↩

How the Web Page Grew Up

Chances are good that you are building, have built, or are someday going to build a web application.[1] So let’s take a high level look at what constitutes a modern web application, and how we got here.

Remember the 1990s? A time when gas was cheap, grunge was in, and people were still smarter than their mobile phones? Of course, it was also the decade in which the Web was born. Seemed like everyone had a “web page”, some people even had whole “web sites”, connected by “hyperlinks”. Ahhh, good times. Let’s take a look back at how web servers worked in those glory years…

Note: While the details have gotten dramatically more interesting (read: complicated) over the last decade, these basic principles still fundamentally power any web application you use or build today, so pay attention, dammit!

Request/Response is the Heartbeat of the Web

The core of the web is powered by a series of requests and responses. A request is most often made by your browser; it contacts an HTTP server, which is just a program running on a machine[2] with a public IP address somewhere. The job of the HTTP server is to look at the URL you are requesting[3] and figure out what to send back to you. It acts much like a waiter, who brings your food requests to the kitchen, drink requests to the bar, and maybe the occasional request to the maître d’ or the valet. The HTTP server will route requests for a web page to one place, requests for an image to another place, and may even redirect your browser to an entirely different server to handle some requests.
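
Stripped to its essentials, one beat of that heartbeat looks like this (a simplified exchange; real requests and responses carry many more headers):

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html>
...the HTML for the page...
</html>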

Basic HTML

For those who forgot the HTML they learned from a 700-page Teach Yourself HTML in 21 Days book in 1996, here’s a quick reminder. This is basic stuff, but stick with me, and we’ll see how it evolves into the “real” web applications we build today.

Web pages are just boring text files.[4] What makes them special is that they have formatting tags defined by HTML right inline with the text. Here’s an over-simplified HTML text file:


<html>
 <h3>Favorite Quotes</h3>
 <strong>Seymour Cray</strong> famously said:
 <blockquote>“The trouble with programmers is that you can never tell what a programmer is doing until it’s too late.”</blockquote>
 That is <em>so true</em>, don’t you think?
</html>

All those bracketed characters are the HTML markup[5]. When you open this file in a web browser, the tags disappear and you get the formatted result: a heading, Seymour Cray’s name in bold, and the quote set off in an indented block.

The Web Server

Now we know what the browser is showing us, and we said that the job of the HTTP server is typically to return HTML in response to a browser request. So how does it do that, exactly?

In the simplest systems (welcome back to 1993), the HTTP server has the file in a directory tree on its local hard drive. For a corporate page with CEO Bob’s profile, the folders might look like this:
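
www.example.com/          (the web root on the server’s disk)
└── about/
    └── team/
        ├── bob.html
        └── bob-headshot.png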

When your browser requests the URL http://www.example.com/about/team/bob.html, this simple web-server looks in the corresponding directories, finds that path on disk, and responds to the browser with the contents of the bob.html file.

When the browser gets the HTML, in addition to rendering it, it notices that there’s an image at http://www.example.com/about/team/bob-headshot.png, referenced in an HTML tag[6], so the browser issues a second request. The HTTP server again finds the requested file in the path corresponding to the URL, and returns the contents of the image file.

For many modern web sites, your browser will issue hundreds of requests to gather the information to display a single page. That’s the request/response heartbeat of the web at work.

Server-Generated HTML

Web authors had barely mastered all that stuff above, when inevitably we wanted more control over the web pages. We wanted to do things like

  • Show a page for every product in a store without having to write thousands of separate HTML files.
  • Display comments on an article in real-time without having to modify the article page HTML for each new comment.
  • Provide configurable messages for different target audiences to a site.
  • Show web-pages with real-time data (like the current weather).
and just about every other web behavior we now take for granted.

These are all accomplished through a great conceptual leap that transforms the web:

As long as the browser gets back valid HTML, it doesn’t care what the server did to get it. So the server doesn’t need to have HTML files on disk, it can create them, on the fly, using any programming tools it wants.

Read that one more time, because it’s easy to take for granted how transformative that premise is for the web as we know it today.

Adding a Timestamp to Our Page

Let’s see a simple example of how this might work. The PHP scripting language is a venerable[7] web language specifically designed to co-mingle code and plain HTML.

Here, we’ve modified the HTML page with a special new tag, and inside that tag, we’ve written PHP code that prints out the current date and time.

<html>
 <h3>Favorite Quotes</h3>
 <strong>Seymour Cray</strong> famously said:
 <blockquote>“The trouble with programmers is that you can never tell what a programmer is doing until it’s too late.”</blockquote>
 I was reading this page on
 <?php
 echo date("Y-m-d H:i:s", time());
 ?>
</html>

Unfortunately, it’s not enough to just put this on the HTTP server and run it like before. The server needs to do more than simply return the contents of that file on disk. We tell the HTTP server[8] that any file in our website directory that has a “.php” file extension should instead be passed to the PHP program. And the server responds to the browser with whatever that program prints out.

In order for this to work, the machine running your HTTP server has to have the PHP application installed. When the above file is run through the PHP program, it prints out the following contents.

<html>
 <h3>Favorite Quotes</h3>
 <strong>Seymour Cray</strong> famously said:
 <blockquote>“The trouble with programmers is that you can never tell what a programmer is doing until it’s too late.”</blockquote>
 I was reading this page on
 2013-10-01 12:42:47
</html>

which our HTTP server returns to the browser, and the browser happily displays it!

In this example, the program that’s getting run is actually that hybrid HTML/PHP file. The PHP program on the server knows how to identify the sections of the file written in code; it interprets those, replacing their contents with any text printed out by the code. It then returns the processed result to our HTTP server, which in turn returns it to the requesting browser.

Web Applications Today

While some web sites today are still delivered simply through stored HTML files, most are powered by lots of server-side code to generate HTML. Versions of our simple PHP example, but on steroids. And many (in fact, most of the ones you use on a daily basis) are built on huge platforms with hundreds of sophisticated moving parts. For example, when you do a simple Google search, you are getting back an HTML page that’s never existed before, and will never exist in that state again. It takes literally thousands of machines, working synchronously and in the background, to generate that page for you in a few milliseconds. Nevertheless, the basic process is the same: you make a request for a URL and the HTTP server you talk to does all the heavy lifting and returns a simple plaintext blob of HTML.

This is the great power and elegance of the Web. The simple contract between client browser and HTTP server, embodied in a URL request and HTML response, yields astounding flexibility and capability. It’s a power to be exploited by great web applications.


  1. I really mean almost everyone…it’s looking like this web thing isn’t a fad after all  ↩

  2. a computer similar to your desktop at home, but running in a rack in a datacenter most likely  ↩

  3. Plus some other data, like your browser settings, your language settings, information in your cookies, and more.  ↩

  4. Just plain old text files, like the ones made by TextEdit on your Mac or Notepad.exe on the Windows machine you had once upon a time.  ↩

  5. The files are “marked up”, like a copy editor marks up a first draft with proofreading symbols. Hence the “markup language” in HTML (Hyper Text Markup Language)  ↩

  6. Looks like <img src="http://www.example.com/about/team/bob-headshot.png" height="320" width="240"/>  ↩

  7. Yeah, that’s code for “old and crusty and not cool any more”, but it’s still alive and kicking, so don’t be a hater.  ↩

  8. Each HTTP server has configuration files that it loads to determine behaviors like which programs to run to generate code for which URLs  ↩