Spent a decent portion of my professional life with init.d. Had to deploy a set of Ubuntu servers last week (use FreeBSD at home), which marked my first actual brush with systemd after a long while of sysadmin-ing Linux systems. It’s weird, takes some getting used to, and has a lovely Enterprise™ smell to it¹, but I don’t think I mind it too much, especially with a nice cheatsheet. Just ergonomics; no comments on its security and stewardship 🤐
I wanted to know more about its history and enjoyed this really excellent talk by Benno Rice. Had no idea that its creator received death threats and various other forms of online abuse over an innocuous set of ideas and piece of software. Unbelievable.
Some select quotes from the talk and about systemd:
¹ I imagine that init.d did too when it was introduced.
Jira is middle-management-ware, a term I made up for software that serves the needs of middle management, or, at least, the needs middle management thinks it has, which comes to the same thing as long as you’re selling to them. (link)
JIRA makes it dangerously easy to implement overly bureaucratic processes. A certain kind of organization is drawn to it for that reason. Even a healthy organization switching to JIRA can get carried away with the tools now at its disposal. JIRA is a software product but also a social institution, an organizational philosophy. Sure, you can have the software without the attitude or vice versa, but use of JIRA is still a (weak) negative signal about the quality of an employer.
Turns out that the main thing protecting employee autonomy is the logistical difficulty of micromanagement. JIRA “solves” that problem. (link)
I know people who’ve worked there (none of whom are with the company, mercifully) and have heard nothing but fascinating tales of dysfunction, fiefdoms, sinecures, overwork, and bureaucracy. One engineer told me that, of all the bad places he’d worked at, he felt his “soul dying slowly” at Oracle. It’s a generic and very real Evil Corporation™, and probably the company the protagonists in Office Space work at.
And this wouldn’t be too far-fetched a thought. Consider that the producers of Terminator: Genisys, who are an Oracle co-founder’s own children, based “Cyberdyne Systems, the fictional defense company responsible for the creation of the evil AI Skynet” on their dad’s company. Amazing.
From an interview with Vincent Connare, creator of Comic Sans:
Q. What do you think of Comic Sans’ detractors?
A. I think most of them secretly like Comic Sans — or at least wish they had made it. Interesting fact: the main designer at Twitter tweeted that the most server space is used by complaints about: first, airlines; second, Comic Sans; and third, Justin Bieber. So not even The Bieber can beat Comic Sans!
Regular people who are not typographers or graphic designers choose Comic Sans because they like it, it’s as simple as that. Comic Sans isn’t complicated, it isn’t sophisticated, it isn’t the same old text typeface like in a newspaper. It’s just fun — and that’s why people like it.
“It’s like, ‘Not only am I going to refuse to submit these documents, but I’m going to use a typeface that doesn’t submit to the solemnity of the law, and Congress and public institutions,’” said Michael Bierut, a partner at the design firm Pentagram. “Or maybe he just likes Comic Sans. It’s hard to say. Few typefaces are this freighted with public opinion.”
I think these are the final words on the matter from the creator himself:
If you love Comic Sans you don’t know much about typography. And if you hate Comic Sans you need a new hobby.
This is how I use the good parts of @awscloud, while filtering out all the distracting hype.
My background: I’ve been using AWS for 11 years — since before there was a console. I also worked inside AWS for 8 years (Nov 2010 - Feb 2019).
My experience is in websites/apps/services. From tiny personal projects to commercial apps running on 8,000 servers. If what you do is AI, ML, ETL, HPC, DBs, blockchain, or anything significantly different from web apps, what I’m writing here might not be relevant.
Step 1: Forget that all these things exist: Microservices, Lambda, API Gateway, Containers, Kubernetes, Docker.
Anything whose main value proposition is about “ability to scale” will likely trade off your “ability to be agile & survive”. That’s rarely a good trade off.
Start with a t3.nano EC2 instance, and do all your testing & staging on it. It only costs $3.80/mo.
Then before you launch, use something bigger for prod, maybe an m5.large (2 vCPU & 8 GB mem). It’s $70/mo and can easily serve 1 million page views per day.
1 million views is a lot. For example, getting on the front page of @newsycombinator will get you ~15-20K views. That’s just 2% of the capacity of an m5.large.
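The capacity claim above is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch (the 1 million views/day figure and the ~20K-view Hacker News spike are the thread’s own estimates, not measurements):

```python
# Back-of-the-envelope capacity math for a single m5.large,
# using the figures quoted in the thread.
DAILY_CAPACITY = 1_000_000        # page views/day an m5.large can reportedly serve
SECONDS_PER_DAY = 24 * 60 * 60

# Average request rate at full daily capacity
avg_rps = DAILY_CAPACITY / SECONDS_PER_DAY
print(f"average load at capacity: {avg_rps:.1f} requests/sec")

# A Hacker News front-page spike (~20K views) as a share of daily capacity
hn_spike = 20_000
print(f"HN spike as share of daily capacity: {hn_spike / DAILY_CAPACITY:.0%}")
```

About 11–12 requests per second on average, and a front-page spike is indeed only ~2% of the day’s capacity, which is where the thread’s “2%” figure comes from.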
It might be tempting to use Lambda & API Gateway to save $70/mo, but then you’re going to have to write your software to fit a new immature abstraction and deal with all sorts of limits and constraints.
Basic stuff such as using a cache, debugging, or collecting telemetry/analytics data becomes significantly harder when you don’t have access to the server. But probably the biggest disadvantage is that it makes local development much harder.
And that’s the last thing you need. I can’t emphasize enough how important it is that you can easily start your entire application on your laptop, with one click.
With Lambda & API Gateway you’re going to be constantly battling your dev environment. Not worth it, IMO.
CloudFormation: Use it. But too much of it can also be a problem. First of all, there are some things that CFN can’t do. But more importantly, some things are best left out of CFN because it can do more harm than good.
The rule of 👍: If something is likely to be static, it’s a good candidate for CFN. Ex: VPCs, load balancers, build & deploy pipelines, IAM roles, etc. If something is likely to be modified over time, then using CFN will likely be a big headache. Ex: Autoscaling settings.
I like having a separate shell script to create things that CFN shouldn’t know about.
And for things that are hard/impossible to script, I just do them manually. Ex: Route 53 zones, ACM cert creation/validation, CloudTrail config, domain registration.
The test for whether your infra-as-code setup is good enough is whether you feel confident that you can tear down your stack & bring it up again in a few minutes without any mistakes. Spending an unbounded amount of time in pursuit of scripting everything is dumb.
Load balancers: You should probably use one even if you only have 1 instance. For $16/mo you get automatic TLS cert management, and that alone makes it worth it IMO. You just set it up once & forget about it. An ALB is probably what you’ll need, but NLB is good too.
Autoscaling: You won’t need it to spin instances up & down based on utilization. Unless your profit margins are as thin as Amazon’s, what you need instead is abundant capacity headroom. Permanently. Then you can sleep well at night — unlike Amazon’s oncall engineers 🤣
But Autoscaling is still useful. Think of it as a tool to help you spin up or replace instances according to a template. If you have a bad host, you can just terminate it and AS will replace it with an identical one (hopefully healthy) in a couple of minutes.
VPCs, Subnets, & Security Groups: These may look daunting, but they’re not that hard to grasp. You have no option but to use them, so it’s worth spending a day or two learning all there is about them. Learn through the console, but at the end set them up with CFN.
Route 53: Use it. It integrates nicely with the load balancers, and it does everything you need from a DNS service. I create hosted zones manually, but I set up A records via CFN. I also use Route 53 for .com domain registration.
CodeBuild/Deploy/Pipeline: This suite has a lot of rough edges and setup can be frustrating. But once you do set it up, the final result is simple and with few moving parts.
Don’t bother with CodeCommit though. Stick with GitHub.
Sample pipeline: A template for setting up an AWS environment from scratch.
S3: At 2.3 cents per GB/mo, don’t bother looking elsewhere for file storage. You can expect downloads of 90 MB/s per object and about a 50 ms first-byte latency. Use the default standard storage class unless you really know what you’re doing.
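The pricing and throughput figures above translate into concrete numbers easily. A quick sketch using the thread’s stated rates (the 500 GB and 1 GB amounts are illustrative, not from the thread):

```python
# Rough S3 cost and transfer-time estimates using the figures quoted above.
PRICE_PER_GB_MONTH = 0.023   # standard storage class, as quoted in the thread
THROUGHPUT_MBPS = 90         # ~90 MB/s download per object
FIRST_BYTE_MS = 50           # ~50 ms first-byte latency

# Monthly cost for a hypothetical 500 GB of storage
store_gb = 500
monthly_cost = store_gb * PRICE_PER_GB_MONTH
print(f"storing {store_gb} GB: ${monthly_cost:.2f}/mo")

# Time to fetch a 1 GB object: first-byte latency plus streaming time
object_mb = 1024
seconds = FIRST_BYTE_MS / 1000 + object_mb / THROUGHPUT_MBPS
print(f"1 GB download: ~{seconds:.1f} s")
```

At these rates, half a terabyte costs about $11.50 a month, which is why it’s hard to justify looking elsewhere.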
Database: Today, DynamoDB is an option you should consider. If you can live without “joins”, DDB is probably your best option for a database. With per-request pricing it’s both cheap and a truly zero burden solution. Remember to turn on point-in-time backups.
But if you want the query flexibility of SQL, I’d stick with RDS. Aurora is fascinating tech, and I’m really optimistic about its future, but it hasn’t passed the test of time yet. You’ll end up facing a ton of poorly documented issues with little community support.
CloudFront: I’d usually start without CloudFront. It’s one less thing to configure and worry about. But it’s something worth considering eventually, even just for the DDoS protection, if not for performance.
SQS: You likely won’t need it, and if you needed a message queue I’d consider something in-process first. But if you do have a good use case for it, SQS is solid, reliable, and reasonably straightforward to use.
Conclusion: I like to separate interesting new tech from tech that has survived the test of time. EC2, S3, RDS, DDB, ELB, EBS, SQS definitely have. If you’re considering alternatives, there should be a strong compelling reason for losing all the benefits accrued over time.
Back in the good old days – the “Golden Era” of computers, it was easy to separate the men from the boys (sometimes called “Real Men” and “Quiche Eaters” in the literature). During this period, the Real Men were the ones that understood computer programming, and the Quiche Eaters were the ones that didn’t. A real computer programmer said things like “DO 10 I=1,10” and “ABEND” (they actually talked in capital letters, you understand), and the rest of the world said things like “computers are too complicated for me” and “I can’t relate to computers – they’re so impersonal”. (A previous work points out that Real Men don’t “relate” to anything, and aren’t afraid of being impersonal.)
But, as usual, times change. We are faced today with a world in which little old ladies can get computerized microwave ovens, 12 year old kids can blow Real Men out of the water playing Asteroids and Pac-Man, and anyone can buy and even understand their very own Personal Computer. The Real Programmer is in danger of becoming extinct, of being replaced by high-school students with TRASH-80s!
There is a clear need to point out the differences between the typical high-school junior Pac-Man player and a Real Programmer. Understanding these differences will give these kids something to aspire to – a role model, a Father Figure. It will also help employers of Real Programmers to realize why it would be a mistake to replace the Real Programmers on their staff with 12 year old Pac-Man players (at a considerable salary savings).
The easiest way to tell a Real Programmer from the crowd is by the programming language he (or she) uses. Real Programmers use FORTRAN. Quiche Eaters use PASCAL. Niklaus Wirth, the designer of PASCAL, was once asked, “How do you pronounce your name?”. He replied “You can either call me by name, pronouncing it ‘Veert’, or call me by value, ‘Worth’.” One can tell immediately from this comment that Niklaus Wirth is a Quiche Eater. The only parameter passing mechanism endorsed by Real Programmers is call-by-value-return, as implemented in the IBM/370 FORTRAN G and H compilers. Real programmers don’t need abstract concepts to get their jobs done: they are perfectly happy with a keypunch, a FORTRAN IV compiler, and a beer.
Real Programmers do List Processing in FORTRAN.
Real Programmers do String Manipulation in FORTRAN.
Real Programmers do Accounting (if they do it at all) in FORTRAN.
Real Programmers do Artificial Intelligence programs in FORTRAN.
If you can’t do it in FORTRAN, do it in assembly language. If you can’t do it in assembly language, it isn’t worth doing.
Computer science academicians have gotten into the “structured programming” rut over the past several years. They claim that programs are more easily understood if the programmer uses some special language constructs and techniques. They don’t all agree on exactly which constructs, of course, and the examples they use to show their particular point of view invariably fit on a single page of some obscure journal or another – clearly not enough of an example to convince anyone. When I got out of school, I thought I was the best programmer in the world. I could write an unbeatable tic-tac-toe program, use five different computer languages, and create 1000 line programs that WORKED. (Really!) Then I got out into the Real World. My first task in the Real World was to read and understand a 200,000 line FORTRAN program, then speed it up by a factor of two. Any Real Programmer will tell you that all the Structured Coding in the world won’t help you solve a problem like that – it takes actual talent. Some quick observations on Real Programmers and Structured Programming:
Real Programmers aren’t afraid to use GOTOs.
Real Programmers can write five page long DO loops without getting confused.
Real Programmers enjoy Arithmetic IF statements because they make the code more interesting.
Real Programmers write self-modifying code, especially if it saves them 20 nanoseconds in the middle of a tight loop.
Real Programmers don’t need comments: the code is obvious.
Since FORTRAN doesn’t have a structured IF, REPEAT … UNTIL, or CASE statement, Real Programmers don’t have to worry about not using them. Besides, they can be simulated when necessary using assigned GOTOs.
Data structures have also gotten a lot of press lately. Abstract Data Types, Structures, Pointers, Lists, and Strings have become popular in certain circles. Wirth (the above-mentioned Quiche Eater) actually wrote an entire book contending that you could write a program based on data structures, instead of the other way around. As all Real Programmers know, the only useful data structure is the array. Strings, lists, structures, sets – these are all special cases of arrays and can be treated that way just as easily without messing up your programming language with all sorts of complications. The worst thing about fancy data types is that you have to declare them, and Real Programming Languages, as we all know, have implicit typing based on the first letter of the (six character) variable name.
What kind of operating system is used by a Real Programmer? CP/M? God forbid – CP/M, after all, is basically a toy operating system. Even little old ladies and grade school students can understand and use CP/M.
Unix is a lot more complicated of course – the typical Unix hacker never can remember what the PRINT command is called this week – but when it gets right down to it, Unix is a glorified video game. People don’t do Serious Work on Unix systems: they send jokes around the world on USENET and write adventure games and research papers.
No, your Real Programmer uses OS/370. A good programmer can find and understand the description of the IJK305I error he just got in his JCL manual. A great programmer can write JCL without referring to the manual at all. A truly outstanding programmer can find bugs buried in a 6 megabyte core dump without using a hex calculator. (I have actually seen this done.)
OS/370 is a truly remarkable operating system. It’s possible to destroy days of work with a single misplaced space, so alertness in the programming staff is encouraged. The best way to approach the system is through a keypunch. Some people claim there is a Time Sharing system that runs on OS/370, but after careful study I have come to the conclusion that they are mistaken.
What kind of tools does a Real Programmer use? In theory, a Real Programmer could run his programs by keying them into the front panel of the computer. Back in the days when computers had front panels, this was actually done occasionally. Your typical Real Programmer knew the entire bootstrap loader by memory in hex, and toggled it in whenever it got destroyed by his program. (Back then, memory was memory – it didn’t go away when the power went off. Today, memory either forgets things when you don’t want it to, or remembers things long after they’re better forgotten.) Legend has it that Seymour Cray, inventor of the Cray I supercomputer and most of Control Data’s computers, actually toggled the first operating system for the CDC7600 in on the front panel from memory when it was first powered on. Seymour, needless to say, is a Real Programmer.
One of my favorite Real Programmers was a systems programmer for Texas Instruments. One day, he got a long distance call from a user whose system had crashed in the middle of some important work. Jim was able to repair the damage over the phone, getting the user to toggle in disk I/O instructions at the front panel, repairing system tables in hex, reading register contents back over the phone. The moral of this story: while a Real Programmer usually includes a keypunch and lineprinter in his toolkit, he can get along with just a front panel and a telephone in emergencies.
In some companies, text editing no longer consists of ten engineers standing in line to use an 029 keypunch. In fact, the building I work in doesn’t contain a single keypunch. The Real Programmer in this situation has to do his work with a text editor program. Most systems supply several text editors to select from, and the Real Programmer must be careful to pick one that reflects his personal style. Many people believe that the best text editors in the world were written at Xerox Palo Alto Research Center for use on their Alto and Dorado computers. Unfortunately, no Real Programmer would ever use a computer whose operating system is called SmallTalk, and would certainly not talk to the computer with a mouse.
Some of the concepts in these Xerox editors have been incorporated into editors running on more reasonably named operating systems. EMACS and VI are probably the most well known of this class of editors. The problem with these editors is that Real Programmers consider “what you see is what you get” to be just as bad a concept in text editors as it is in women. No, the Real Programmer wants a “you asked for it, you got it” text editor – complicated, cryptic, powerful, unforgiving, dangerous. TECO, to be precise.
It has been observed that a TECO command sequence more closely resembles transmission line noise than readable text. One of the more entertaining games to play with TECO is to type your name in as a command line and try to guess what it does. Just about any possible typing error while talking with TECO will probably destroy your program, or even worse – introduce subtle and mysterious bugs in a once working subroutine.
For this reason, Real Programmers are reluctant to actually edit a program that is close to working. They find it much easier to just patch the binary object code directly, using a wonderful program called SUPERZAP (or its equivalent on non-IBM machines). This works so well that many working programs on IBM systems bear no relation to the original FORTRAN code. In many cases, the original source code is no longer available. When it comes time to fix a program like this, no manager would even think of sending anything less than a Real Programmer to do the job – no Quiche Eating structured programmer would even know where to start. This is called “job security”.
Some programming tools NOT used by Real Programmers:
FORTRAN preprocessors like MORTRAN and RATFOR. The Cuisinarts of programming – great for making Quiche. See comments above on structured programming.
Source language debuggers. Real Programmers can read core dumps.
Compilers with array bounds checking. They stifle creativity, destroy most of the interesting uses for EQUIVALENCE, and make it impossible to modify the operating system code with negative subscripts. Worst of all, bounds checking is inefficient.
Source code maintenance systems. A Real Programmer keeps his code locked up in a card file, because it implies that its owner cannot leave his important programs unguarded.
THE REAL PROGRAMMER AT WORK
Where does the typical Real Programmer work? What kind of programs are worthy of the efforts of so talented an individual? You can be sure that no real Programmer would be caught dead writing accounts-receivable programs in COBOL, or sorting mailing lists for People magazine. A Real Programmer wants tasks of earth-shaking importance (literally!):
Real Programmers work for Los Alamos National Laboratory, writing atomic bomb simulations to run on Cray I supercomputers.
Real Programmers work for the National Security Agency, decoding Russian transmissions.
It was largely due to the efforts of thousands of Real Programmers working for NASA that our boys got to the moon and back before the cosmonauts.
The computers in the Space Shuttle were programmed by Real Programmers.
Real Programmers are at work for Boeing designing the operating systems for cruise missiles.
Some of the most awesome Real Programmers of all work at the Jet Propulsion Laboratory in California. Many of them know the entire operating system of the Pioneer and Voyager spacecraft by heart. With a combination of large ground-based FORTRAN programs and small spacecraft-based assembly language programs, they can do incredible feats of navigation and improvisation, such as hitting ten-kilometer wide windows at Saturn after six years in space, and repairing or bypassing damaged sensor platforms, radios, and batteries. Allegedly, one Real Programmer managed to tuck a pattern-matching program into a few hundred bytes of unused memory in a Voyager spacecraft that searched for, located, and photographed a new moon of Jupiter.
One plan for the upcoming Galileo spacecraft mission is to use a gravity assist trajectory past Mars on the way to Jupiter. This trajectory passes within 80 +/- 3 kilometers of the surface of Mars. Nobody is going to trust a PASCAL program (or PASCAL programmer) for navigation to these tolerances.
As you can tell, many of the world’s Real Programmers work for the U.S. Government, mainly the Defense Department. This is as it should be. Recently, however, a black cloud has formed on the Real Programmer horizon.
It seems that some highly placed Quiche Eaters at the Defense Department decided that all Defense programs should be written in some grand unified language called “ADA” (registered trademark, DoD). For a while, it seemed that ADA was destined to become a language that went against all the precepts of Real Programming – a language with structure, a language with data types, strong typing, and semicolons. In short, a language designed to cripple the creativity of the typical Real Programmer. Fortunately, the language adopted by DoD has enough interesting features to make it approachable: it’s incredibly complex, includes methods for messing with the operating system and rearranging memory, and Edsger Dijkstra doesn’t like it. (Dijkstra, as I’m sure you know, was the author of “GoTos Considered Harmful” – a landmark work in programming methodology, applauded by Pascal Programmers and Quiche Eaters alike.) Besides, the determined Real Programmer can write FORTRAN programs in any language.
The Real Programmer might compromise his principles and work on something slightly more trivial than the destruction of life as we know it, providing there’s enough money in it. There are several Real Programmers building video games at Atari, for example. (But not playing them. A Real Programmer knows how to beat the machine every time: no challenge in that.) Everyone working at LucasFilm is a Real Programmer. (It would be crazy to turn down the money of 50 million Star Wars fans.) The proportion of Real Programmers in Computer Graphics is somewhat lower than the norm, mostly because nobody has found a use for Computer Graphics yet. On the other hand, all Computer Graphics is done in FORTRAN, so there are a fair number of people doing Graphics in order to avoid having to write COBOL programs.
THE REAL PROGRAMMER AT PLAY
Generally, the Real Programmer plays the same way he works – with computers. He is constantly amazed that his employer actually pays him to do what he would be doing for fun anyway, although he is careful not to express this opinion out loud. Occasionally, the Real Programmer does step out of the office for a breath of fresh air and a beer or two. Some tips on recognizing real programmers away from the computer room:
At a party, the Real Programmers are the ones in the corner talking about operating system security and how to get around it.
At a football game, the Real Programmer is the one comparing the plays against his simulations printed on 11 by 14 fanfold paper.
At the beach, the Real Programmer is the one drawing flowcharts in the sand.
A Real Programmer goes to a disco to watch the light show.
At a funeral, the Real Programmer is the one saying “Poor George. And he almost had the sort routine working before the coronary.”
In a grocery store, the Real Programmer is the one who insists on running the cans past the laser checkout scanner himself, because he never could trust keypunch operators to get it right the first time.
THE REAL PROGRAMMER’S NATURAL HABITAT
What sort of environment does the Real Programmer function best in? This is an important question for the managers of Real Programmers. Considering the amount of money it costs to keep one on the staff, it’s best to put him (or her) in an environment where he can get his work done.
The typical Real Programmer lives in front of a computer terminal. Surrounding this terminal are:
Listings of all programs the Real Programmer has ever worked on, piled in roughly chronological order on every flat surface in the office.
Some half-dozen or so partly filled cups of cold coffee. Occasionally, there will be cigarette butts floating in the coffee. In some cases, the cups will contain Orange Crush.
Unless he is very good, there will be copies of the OS JCL manual and the Principles of Operation open to some particularly interesting pages.
Taped to the wall is a line-printer Snoopy calendar for the year 1969.
Strewn about the floor are several wrappers for peanut butter filled cheese bars (the type that are made stale at the bakery so they can’t get any worse while waiting in the vending machine).
Hiding in the top left-hand drawer of the desk is a stash of double stuff Oreos for special occasions.
Underneath the Oreos is a flow-charting template, left there by the previous occupant of the office. (Real Programmers write programs, not documentation. Leave that to the maintenance people.)
The Real Programmer is capable of working 30, 40, even 50 hours at a stretch, under intense pressure. In fact, he prefers it that way. Bad response time doesn’t bother the Real Programmer – it gives him a chance to catch a little sleep between compiles. If there is not enough schedule pressure on the Real Programmer, he tends to make things more challenging by working on some small but interesting part of the problem for the first nine weeks, then finishing the rest in the last week, in two or three 50-hour marathons. This not only impresses his manager, who was despairing of ever getting the project done on time, but creates a convenient excuse for not doing the documentation. In general:
No Real Programmer works 9 to 5. (Unless it’s 9 in the evening to 5 in the morning.)
Real Programmers don’t wear neckties.
Real Programmers don’t wear high heeled shoes.
Real Programmers arrive at work in time for lunch. 
A Real Programmer might or might not know his wife’s name. He does, however, know the entire ASCII (or EBCDIC) code table.
Real Programmers don’t know how to cook. Grocery stores aren’t often open at 3 a.m., so they survive on Twinkies and coffee.
What of the future? It is a matter of some concern to Real Programmers that the latest generation of computer programmers are not being brought up with the same outlook on life as their elders. Many of them have never seen a computer with a front panel. Hardly anyone graduating from school these days can do hex arithmetic without a calculator. College graduates these days are soft – protected from the realities of programming by source level debuggers, text editors that count parentheses, and user friendly operating systems. Worst of all, some of these alleged computer scientists manage to get degrees without ever learning FORTRAN! Are we destined to become an industry of Unix hackers and Pascal programmers?
On the contrary. From my experience, I can only report that the future is bright for Real Programmers everywhere. Neither OS/370 nor FORTRAN show any signs of dying out, despite all the efforts of Pascal programmers the world over. Even more subtle tricks, like adding structured coding constructs to FORTRAN have failed. Oh sure, some computer vendors have come out with FORTRAN 77 compilers, but every one of them has a way of converting itself back into a FORTRAN 66 compiler at the drop of an option card – to compile DO loops like God meant them to be.
Even Unix might not be as bad on Real Programmers as it once was. The latest release of Unix has the potential of an operating system worthy of any Real Programmer. It has two different and subtly incompatible user interfaces, an arcane and complicated terminal driver, virtual memory. If you ignore the fact that it’s structured, even C programming can be appreciated by the Real Programmer: after all, there’s no type checking, variable names are seven (ten? eight?) characters long, and the added bonus of the Pointer data type is thrown in. It’s like having the best parts of FORTRAN and assembly language in one place. (Not to mention some of the more creative uses for #define.)
No, the future isn’t all that bad. Why, in the past few years, the popular press has even commented on the bright new crop of computer nerds and hackers leaving places like Stanford and M.I.T. for the Real World. From all evidence, the spirit of Real Programming lives on in these young men and women. As long as there are ill-defined goals, bizarre bugs, and unrealistic schedules, there will be Real Programmers willing to jump in and Solve The Problem, saving the documentation for later. Long live FORTRAN!
I would like to thank Jan E., Dave S., Rich G., Rich E. for their help in characterizing the Real Programmer, Heather B. for the illustration, Kathy E. for putting up with it, and atd!avsdS:mark for the initial inspiration.
Feirstein, B., Real Men Don’t Eat Quiche, New York, Pocket Books, 1982.
The focus of this project is to build a super reliable, durable, and stable network device from tried and tested tech. This is not a project for pushing the limits or testing out flashy new stacks. This affinity for ‘boring’ technology will be reflected in most of the choices made here, from the hardware to the way we configure services and daemons.
Of course. Well, the transaction was in person and in cash (of course.)
It was just a little bag of weed sold through an Arpanet account in Stanford’s artificial intelligence lab in 1972. It’s not clear who was in on the sale aside from the students, but despite the underhanded nature of the deal, anyone with knowledge of the sale who wasn’t a square must have been excited about the implications of this early use of the Internet.
As the article clarifies:
The first online sale that we’d recognize as such today, complete with credit card information and the United States Postal Service, wasn’t until 1994. On August 11 that year, Dan Kohn sold a copy of the Sting album Ten Summoner’s Tales to a man in Philadelphia for $12.48 plus shipping, paid via encrypted credit card. Kohn later bragged, “Even if the N.S.A. was listening in, they couldn’t get his credit card number.”
(Emphasis mine.) Indeed, Gregory. When USB cards go missing, one needs formal training in Algorithms, Data Structures, the Theories of Computation and Complexity, Formal Logic (of course), and more, to express appropriate outrage at an election that’s fraudulent only in your head and only because your guy didn’t win.
The software industry is currently going through the “disposable plastic” crisis the physical world went through in the mid-20th century (and is still paying down the debt for). You can run software from 1980 or 2005 on a modern desktop without too much hassle, but anything between there and 2-3 years ago? Black hole of fad frameworks and brittle dependencies. Computer Archaeology is going to become a full-time job.
I’m annoyed every time I have to use the infernal thing.
It tries (poorly) to be something other than a damn TV remote1.
There’s no way to tell which end is up.
There’s no accidental tap detection when you pick it up.
It’s way too small.
It’s way too slippery.
I use Siri to skip forward and backward because the edge clicks are unmemorable and dysfunctional.
I use the iPhone app when I can and, while I can’t stand the terribly implemented inertial scroll, still find it better than the hardware.
Inertial scrolling does in fact exist on the Siri remote, but the effect is muted. The on-screen movement doesn’t accurately reflect your swiping — scrolling is staggered and it often stops abruptly, when you don’t intend to stop. This makes aspects of navigation, like manual search or entering your email address or password, extremely cumbersome.
I’d also like to point out that unlike every single horror I’ve ever witnessed when looking closer at SCM products, git actually has a simple design, with stable and reasonably well-documented data structures. In fact, I’m a huge proponent of designing your code around the data, rather than the other way around, and I think it’s one of the reasons git has been fairly successful.
[. . .]
I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
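Linus’s data-first point is visible in git’s own core, which is essentially a content-addressed object store: objects are named by the hash of their contents, and everything else is built on top of that one structure. A toy sketch (the `put`/`get` names and the in-memory dict are illustrative, not git’s actual API):

```python
import hashlib

# A content-addressed store: the key for a blob is the hash of its bytes.
# Identical content always gets the same name, so deduplication is free.
store = {}

def put(data: bytes) -> str:
    oid = hashlib.sha1(data).hexdigest()  # git historically uses SHA-1
    store[oid] = data
    return oid

def get(oid: str) -> bytes:
    return store[oid]

oid = put(b"hello world\n")
assert get(oid) == b"hello world\n"
```

Once the data structure is right, the surrounding code (diffing, branching, syncing) becomes comparatively simple, which is the quote’s point.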
Excellent talk by Chris Toomey on Mastering the Vim Language. Features a lot of must-read Vim resources and nice-to-have plugins. Key takeaway for me: Prefer text objects to motions when possible (corollary: “Is this repeatable?”)
Greg Sullivan, a MicroSoft product manager (henceforth MPM), was holding forth on a forthcoming product that will provide Unix-style scripting and shell services on NT for compatibility and to leverage UNIX expertise that moves to the NT platform. The product suite includes the MKS (Mortice Kern Systems) windowing Korn shell, a windowing Perl, and lots of goodies like awk, sed and grep. It actually fills a nice niche for which other products (like the MKS suite) have either been too highly priced or not well enough integrated. An older man, probably mid-50s, stands up in the back of the room and asserts that Microsoft could have done better with their choice of Korn shell. He asks if they had considered others that are more compatible with existing UNIX versions of KSH.
The MPM said that the MKS shell was pretty compatible and should be able to run all UNIX scripts.
The questioner again asserted that the MKS shell was not very compatible and didn’t do a lot of things right that are defined in the KSH language spec. The MPM asserted again that the shell was pretty compatible and should work quite well.
This assertion and counter-assertion went back and forth for a bit, when another member of the audience announced to the MPM that the questioner was, in fact, David Korn of AT&T (now Lucent) Bell Labs. (David Korn is the author of the KornShell.)
Uproarious laughter burst forth from the audience, and it was one of the only times that I have seen a (by then pink cheeked) MPM lost for words or momentarily lacking the usual unflappable confidence. So, what’s a body to do when Microsoft reality collides with everyone else’s?
Vicki Boykis’ excellent article on every aspect of ‘Data Science’ I can think of: a little history, employment prospects, skills, education, and continuous learning.
It would appear that more than half the job, at least, is wrangling (replicating, cleaning, imputing, transferring, understanding, augmenting) data. It’s boring and super-important so, of course, is the least favorite thing 🙃
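One of the wrangling chores listed above, imputation, can be sketched in a few lines: fill the holes in a column with the mean of the known values (the data here is made up for illustration):

```python
# Mean imputation: replace missing values (None) with the mean of the rest.
values = [3.0, None, 7.0, None, 5.0]

known = [v for v in values if v is not None]
mean = sum(known) / len(known)          # (3 + 7 + 5) / 3 = 5.0

cleaned = [v if v is not None else mean for v in values]
print(cleaned)  # [3.0, 5.0, 7.0, 5.0, 5.0]
```

Real pipelines reach for pandas or similar, but the boring, important shape of the work is the same.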
Back in the second century BC, Cato the Elder ended his speeches with the phrase ‘Carthago delenda est,’ which is to say, ‘Carthage must be destroyed.’ It didn’t matter what the ostensible topic of the speech was: above all, Carthage must be destroyed.
I don’t know what my newfound affection for it says about me. Via HackerNews.
[. . .] Gandhi tends to be the first to use nuclear weapons, and spares no expense on wiping your civilization off the map. You probably always thought you were crazy — how could a series that prides itself on historical accuracy portray Gandhi so wrong? Well, you’ll be happy to know that both your sanity and Civilization’s historical integrity aren’t at fault. Instead, a bug’s to blame.
In the earlier Civs, leaders are given a set of attributes that dictate their behavior. One such attribute is a number scale associated with aggressiveness. Gandhi was given the lowest number possible, a rating of 1. However, when a civilization adopted democracy, it granted a civilization -2 to opponent aggression levels. This sent Gandhi’s rating of 1 into the negative, which swung it back around to 255 — the highest possible rating available, and thus, the infamous warmonger Gandhi was born.
This cyclical aggression scale was fixed in later versions of the game, but Gandhi wasn’t totally cured of his bloodlust. The team fixed Gandhi’s aggression rating, but as an Easter egg paying homage to the earlier aggressive versions of Gandhi, ramped his nuke rating through the roof. So, while it may be difficult to push Gandhi over the edge, he goes from zero to nuclear option once you do.
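The wraparound described above is ordinary unsigned 8-bit arithmetic: 1 − 2 underflows to 255. A minimal sketch (the helper name and the exact in-game storage are assumptions based on the story):

```python
# Aggression stored as an unsigned byte: subtracting below zero wraps to 255.
def adopt_democracy(aggression: int) -> int:
    # hypothetical helper modeling the -2 democracy modifier from the story
    return (aggression - 2) % 256  # unsigned 8-bit wraparound

print(adopt_democracy(1))  # 255 — maximally aggressive Gandhi
print(adopt_democracy(5))  # 3  — normal leaders are unaffected
```

In C this would just be an `unsigned char` doing what unsigned chars do; the `% 256` makes the wraparound explicit in Python.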
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.
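Kay’s “messaging with extreme late binding” can be sketched in a few lines: the receiver decides how to respond to a message, by name, only at the moment the message is sent (all names here are illustrative):

```python
# Message-passing with late binding: the handler is looked up at send time,
# and state is hidden behind messages rather than accessed directly.
class Receiver:
    def __init__(self):
        self._state = {"count": 0}   # local, protected state

    def send(self, message: str, *args):
        handler = getattr(self, "_" + message, None)  # resolved per-send
        if handler is None:
            return "does-not-understand"              # à la Smalltalk
        return handler(*args)

    def _increment(self):
        self._state["count"] += 1
        return self._state["count"]

obj = Receiver()
print(obj.send("increment"))  # 1
print(obj.send("fly"))        # does-not-understand
```

This is only a shadow of Smalltalk’s model, but it shows the three ingredients of the quote: messaging, hidden state, and late binding.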
Gave Hugo a try and was quite impressed by the ease and speed. The official documentation kinda sucks at introducing key ideas (like taxonomies) in a gradual way that’s helpful to newcomers, but is great for variable and function references. Found these two posts very helpful. Here’s another that explains template variable scope well. And another that goes over theme development step-by-step.
Sticking to Jekyll for now since
I don’t post that often and can wait a minute for recompilation if/when I have that many posts
Hugo is as insanely fast as advertised. I love the section and taxonomy abstractions, myriad content types, and i18n support. I’d use it to build any static website that’s not a blog. For now, Viva Jekyll.
An article on how Baud Rate isn’t the same as Bit Rate
Baud rate refers to the number of signal or symbol changes that occur per second. A symbol is one of several voltage, frequency, or phase changes. NRZ binary has two symbols, one for each bit 0 or 1, that represent voltage levels. In this case, the baud or symbol rate is the same as the bit rate.
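The relationship generalizes: bit rate equals baud rate times bits per symbol, which is log₂ of the number of distinct symbols. A quick sketch:

```python
import math

# bit rate (bits/s) = baud (symbols/s) x bits per symbol (log2 of symbol count)
def bit_rate(baud: float, num_symbols: int) -> float:
    return baud * math.log2(num_symbols)

# NRZ binary: 2 symbols -> 1 bit/symbol, so bit rate equals baud rate.
print(bit_rate(9600, 2))   # 9600.0
# QPSK: 4 symbols -> 2 bits/symbol, so bit rate is double the baud rate.
print(bit_rate(9600, 4))   # 19200.0
```

This is why a modem can move more bits per second than its symbol rate: each symbol change carries more than one bit.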
Can’t Unsee is “Spot the difference” for UI nerds. Scored 6530. On the “hard” sections, wondered how much the minutiae matter if a user is unable to discern the difference between two comps after a few seconds.
In an age where we interact primarily with branded and marketed web content, Cameron’s World is a tribute to the lost days of unrefined self-expression on the Internet. This project recalls the visual aesthetics from an era when it was expected that personal spaces would always be under construction.
Numbering is done with natural numbers. Let’s take zero to be the smallest natural number1. For the sequence (2, 3, 4, … ,12), using the convention (2 ≤ n < 13) is appropriate because
For a sequence starting with zero, like (0, 1, 2, 3), the left hand condition leaks into unnatural numbers if you use “less than”: (-1 < n).
For an empty sequence, the right hand also leaks into the unnatural if you use “less than or equal to”: (n ≤ 0)
And, minorly, because the following properties (which also hold for another convention, 1 < n ≤ 12) are satisfied:
Difference between bounds (13 - 2 = 11) is the length of the sequence
I know that these two sequences are adjacent: (2 ≤ n < 13) and (13 ≤ n < 24)
All that’s prep for:
When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element. Adhering to convention a) yields, when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element’s ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard – after all those centuries!2 – zero as a most natural number.
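Dijkstra’s preferred convention is exactly what Python’s half-open `range(a, b)` encodes (a ≤ n < b), and the properties above fall out of it directly:

```python
# Half-open ranges: range(a, b) means a <= n < b.
seq = list(range(2, 13))

# Difference between bounds is the length of the sequence: 13 - 2 = 11.
print(len(seq))                # 11

# The empty sequence needs no "unnatural" bound: range(0, 0) is just empty.
print(list(range(0, 0)))       # []

# Adjacent sequences share a bound and concatenate cleanly.
print(list(range(2, 13)) + list(range(13, 24)) == list(range(2, 24)))  # True
```

Zero-based subscripts then give the tidy range 0 ≤ i < N, with each element’s index equal to the number of elements preceding it.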
There’s also this little nugget
I think Antony Jay is right when he states: “In corporate religions as in others, the heretic must be cast out not because of the probability that he is wrong but because of the possibility that he is right.”
🤦‍♂️ The portion of the article that listed functionally similar packages and is-* packages was particularly dismaying. As he points out, there’s a good reason why jQuery and lodash are as immensely popular as they are1.
In addition, in the past 15 years, engineers have commoditized many technical solutions that used to be challenging. Scaling used to be a tough challenge, not any more for many companies. In fact, part of my daily job is to prevent passionate engineers from reinventing wheels in the name of achieving scalability. It’s not because we don’t need to solve scalability problems, but because the infrastructure is good enough for most companies. Building and operating a so-called “big data platform” used to be hard, not that hard any more. Building a machine learning pipeline used to be hard, not that hard any more for many companies. Of course, it’s still challenging to build a highly flexible and automated machine learning pipeline with full support for a closed feedback loop, but many companies can get by without that level of maturity.
The problem isn’t CPU power. The CPU on any modern PC is going to blow away the processing power of any sort of network switch you’d care to buy except the really high-end ones. (Really high end. So high end that unless you already know them by name you are not going to want to buy them)
Offloading to the GPU would make things worse, not better.
The problem is latency. It takes time for the PC to take the buffer from the NIC, copy it to main memory, process it on the CPU, copy it back down into a buffer, and then push it out to the network. All this copying around takes time. You could have a 30,000 GHz processor and it’s not going to help you out any.
No amount of programming or GPU offloading is going to make your I/O faster or give it less latency. This needs to be done in hardware. PCs are not designed to handle this. They are designed to have huge caches where you take a huge amount of data and process it through loops. This is exactly the sort of thing you do NOT want on a switch.
With a switch you want small buffers. You want small buffers optimized to the speed of the networks they are connected to and have the ability to shuffle information from one port to another. You want to get the information in and out as quickly as possible.
That being said, I have no doubt that a Linux switch based on commodity hardware would have no problem keeping up with a 1 Gb/s or even 10 Gb/s network, with performance similar to any typical corporate switch.
The problem then is one of cost, energy, and space. A network switch takes up almost no room on a rack. It uses little electricity and creates little heat compared to a PC-style corporate Linux server. It has lots and lots of ports.
To create a Linux commodity-based switch with 20 or 40 ports the thing is going to be huge, expensive, and hot.