Some random thoughts on programming

Since this post actually got read by people, I made a follow-up to clarify some of what I was saying. Thanks, internet, for helping me realize how bad I am at making my point.

As I near the end of my third decade of programming, I keep finding myself coming back to pondering the same kinds of questions.

  • Why are some people good at this and some are not?
  • Can the latter be trained to be the former?
  • What is the role of language, development environment, methodology, etc. in improving productivity?

I’ve always been partial to the “hacker”: the person who can just flat out code. For some people this seems to come easy. I have been lucky enough to work with a few people like this, and it has been awesome. As you talk with them about a problem, they are already breaking it down and have an idea of how the final system will work. I don’t mean that other aspects of professionalism, experience, etc. aren’t important; it’s just that I believe, in the end, there is a raw kind of talent that you can’t teach. My wife can sing, I can’t, and not for lack of practice. She practices, but she can also belt out a perfect tune with hardly any effort.

What are the core talents that make someone a good hacker? I think there are two:

  1. The ability to break down an idea/system into an abstract set of “concepts and actions”.
  2. The ability to understand how these “concepts and actions” execute in a particular environment.

Note that I didn’t say anything about coding. Coding is actually the least interesting part. If you can do the others well, coding is just another kind of typing. Coding is the thing your fingers do while your mind is flipping between 1) and 2). Learning a new language is less about learning the syntax than it is about learning the execution model.

Writing good, correct code involves not just conceiving of how something should be (1) but also understanding all that can happen (2).

I honestly don’t know if these things can be taught.

In 30 years I have only seen a few examples where a mediocre programmer became good or a good one became great, and zero examples of a mediocre programmer becoming great.

What does this mean for people who want to build large software systems? The easy answer is to hire only great people. Which is fine when you can do it, but not everyone can be, or hire, the best, so what is left for the majority?

What should be done to make things easier for people who aren’t in the top 5%?

There are lots of nice things that have been invented like source code control, higher level languages, garbage collection, etc. which reduce or abstract away some of the problems. And while sometimes these abstractions are leaky, they are really one of the few weapons we have against the beast called Complexity. Complexity is the key problem we face, especially if we are not wizards because:

A person’s talent with regard to 1) and 2) above determines how much complexity they can deal with.

When I was in gradual school at UNC-CH, Fred Brooks (the Mythical Man-Month guy) used to talk about Complexity as a major thing that makes programming hard. It’s not something easily cast aside. Quoting Dr. Brooks:

The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.

I wish I had appreciated everything Fred said when I was a cocky grad student as much as I appreciate it today… sigh.

Some things are hard because they are hard, and not much can be done about them. Still, our goal needs to be not to make things worse than they are.

To this end, some people say:

Do the simplest thing that will work.

I agree but prefer this formulation:

Strive for the simplest thing that can’t possibly fail.

If you think about what can go wrong you are more likely to find the problems than if you think about what will go right.
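To make the “think about what can go wrong” habit concrete, here is a minimal sketch. The `parsePrice` helper is hypothetical (it does not come from the post); the point is that the failure cases are enumerated first, and the success path is what is left over.

```javascript
// A hypothetical helper, written "failure first": enumerate the ways
// the input can be wrong before handling the case where it is right.
function parsePrice(input) {
  if (typeof input !== "string") {
    return { ok: false, error: "expected a string" };
  }
  var trimmed = input.trim();
  var value = Number(trimmed);
  // Number("") is 0, so the empty-string case must be rejected explicitly.
  if (trimmed === "" || !isFinite(value) || value < 0) {
    return { ok: false, error: "not a non-negative number" };
  }
  return { ok: true, value: value };
}

console.log(parsePrice("19.99")); // { ok: true, value: 19.99 }
console.log(parsePrice(null));    // { ok: false, error: "expected a string" }
```

The result object forces the caller to acknowledge that parsing can fail, which is the same discipline applied one level up.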

I wish I could say we were making progress in reducing complexity, but we seem to be adding it to our programming environments rather than removing it. One of the things I have thought about recently is the complexity of these abstractions themselves. The programming languages of today are so much more abstracted and complex than what I learned on (assembler, BASIC, Pascal, C).

For example, the world (or at least the web) is currently engaged in writing or rewriting everything in JavaScript. Not because JS is so awesome, but because 15 years ago it got stuck in browsers, then we all got stuck with browsers, and thus we are all stuck with JS. Sure, there’s lots of server stuff done in various other languages, but really, how long can that last? How long can one justify having to write two parts of an application in two different languages?

The future looks like node.js.

I am not a JS hater. It’s just that I used to think JavaScript was a simple language. As someone who wrote 3D game engine code in C++, my thought was “if these webkids can do JS then it can’t be that complicated”. After building a webapp using JavaScript/Dojo/SVG to create a realtime browser for genetic information, I realized how wrong I was.

Sure, it’s all nice and simple when you are doing this:

$("#imhungry").click(function() { alert("food"); });

However, a lot of complexity lurks just under the hood. And it’s the way it all interacts that concerns me. Crockford called it Lisp in C’s clothing. This should excite a few people and scare the crap out of the rest.

JS has lots of opportunity for expressing yourself in interesting ways. Expressiveness is good, but too much of it can create more problems than it solves. Back when C++ was a bunch of source code you downloaded from AT&T, and you had to compile the compiler yourself before you could use it, we grad students stayed up late eating bad fried chicken and debating the merits of O-O programming. It was at this point I began to wonder whether the productivity gains of O-O were more than offset by the time spent debating what good O-O practice is. This tape has played in my head more than a few times in my career.

It’s not all bad, of course. On balance, garbage collection is a win, and I see lots of benefits to using closures in JS, but the underlying complexities of these things escape most people, and their interactions are understood by even fewer. If you have to slog through this (and you should) to understand what happens when you call a function, you are cutting out a lot of people from fully understanding what is going on when their code executes.
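The closure point can be made concrete with the classic pitfall: callbacks created in a loop all close over the same `var`-scoped variable, so they interact in a way that is invisible to anyone reading the code for the first time. A minimal sketch (the function names are made up for illustration):

```javascript
// Pitfall: all three closures share the one function-scoped `i`,
// so by the time any of them runs, i === 3.
function makeCountersBroken() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(function () { return i; });
  }
  return fns;
}

// One fix: capture the current value explicitly with an IIFE
// (modern code would simply use block-scoped `let`).
function makeCountersFixed() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push((function (n) {
      return function () { return n; };
    })(i));
  }
  return fns;
}

console.log(makeCountersBroken().map(function (f) { return f(); })); // [3, 3, 3]
console.log(makeCountersFixed().map(function (f) { return f(); }));  // [0, 1, 2]
```

Nothing in the broken version looks wrong at a glance, which is exactly the kind of accidental interaction being described.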

I hope that some of this will be addressed by standards of practice. Things like Google’s guidelines seem to be a good start. As with C++, I think that the best hope for improving productivity will actually be to agree on limits on how we use these tools rather than seeing how far we can push the envelope.

Well except for wizards and we know who we are 🙂

43 Replies to “Some random thoughts on programming”

  1. Right On. Nice Article… I’m on year 28 of programming – and I really liked this article.

    P.S. I am a JS hater and a .NET hater too (though good at both), because P-code and compile-on-demand are slower and make it necessary to obfuscate your code if you have intellectual property concerns.

  2. 22 years ago, when I started in programming, I could mess about at home doing a bit of BASIC, and at work I wrote green-screen sales and invoicing apps in COBOL ’74 using ISAM files. Nowadays programming is many times more difficult: ISAM files have been replaced with all-singing, all-dancing SQL databases with a host of frameworks designed to access them (no in-language stuff as in COBOL with ISAM), the green screens have been replaced with your weird and wonderful GUI framework of choice (WPF in my case), and the languages themselves are now many, varied, and unreadable to those not in the know.

    I much prefer doing development work now but there are times when I still hanker after a bit of green screen/ISAM/MOVE W01-FOO TO W02-BAR

  3. Yeah, I’ll have to admit that was pretty random. I am an ok programmer, not great, but better than mediocre. I approach every job the same way, however: by looking at what needs to get done and then at what will be required later to maintain it. Sometimes being OK is all you need to be.

  4. Right on. After working in computers and programming for 27+ years, I have to agree. Programming is, unfortunately, something you inherently understand or you don’t. You can teach semantics, but not the thought processes that go along with it. That being said, great programmers start as mediocre programmers, but quickly rise. Mediocre programmers tend to rise to somewhat competent and then stay there. That’s, unfortunately, the way it is.

    Great article.

  5. Sometimes the “hacker” is just someone who’s very familiar with the problem domain. An average programmer may appear to be a “hacker” to their coworkers when they’re re-implementing a system very similar to what they implemented at two other jobs.

    They already know what works best, and the mistakes they’ve made in previous implementations.

    I think most people don’t program as well as their potential because the best practices haven’t been communicated to them. For example, I see many programmers who don’t seem to know that reducing function length is a good way to keep complexity manageable.

    The best practices are still being worked out; the field of software development is still in its infancy, compared to more mature disciplines such as mathematics and mechanical engineering.

  6. Alan is spot on here. Perhaps the idea that many programmers overlook useful, basic heuristics for programming (such as “shorter functions imply more manageable code, so break down your functions until they are small”) ties in with one theme of the OP, that programming has become more complex in terms of frameworks and maybe language expressiveness.

    When I started programming, it was on the horrible bare-bones no-frills Commodore 64 Basic interpreter, followed a couple of years later by a lovely language called GFA Basic on the Atari ST, and then by 68k assembly and C. Maybe those formative years of just playing about, writing programs with minimal focus on libraries, industry standards and “frameworks”, were helpful in learning how to solve problems and write code.
    But these days I spend more time reading articles about flashy programming/library techniques than actually thinking about and writing programs.

    It could be the case that novice programmers nowadays jump too early onto this wheel, without spending the time doing simple, pure programming, so that they don’t sufficiently develop these key abstractions and heuristic tricks for managing complexity.

  7. Couldn’t agree more about programming skill or JS. The best programmers I’ve ever known generally don’t have a comp-sci education (myself included). In fact, the single best I’ve ever known personally was an English major a couple of papers away from his Doctorate. He found programming and never looked back. As far as JS, I agree that its “appeal” (for lack of a better word) is simply because it’s the only trick in town. As a language it sucks for anything more than basic functionality. Although I use jQuery professionally, I find it sad that we have to resort to bloated frameworks and layers of abstraction to try and hide the basic weakness of the underlying language.

  8. Oh, I guess for that post you’ll earn a lot of flames from mediocre guys who think that they will be great one day. Especially nowadays when everyone and his mother see themselves as coders because they can hack up some javascript/ruby code 🙂

  9. I majored in Mathematics (not applied) and only became interested in programming in my junior year of college. I only started to really learn quality programming skills after leaving college. I find that I often know exactly how to solve a problem before the veteran programmers at my job even have a chance to really think about it. The problem for me is spending the time making sure my code is well structured and maintainable. When working with other programmers, I end up being the one explaining the problem and how we’re going to solve it over and over. My own experience, which I’ll be the first to point out is very limited, has indicated that it’s my experience with the theoretical aspects of computer science and complexity theory, as well as all my time spent proving abstract theorems, that gives me an advantage over the people who just know how to code. Maybe if programmers focused less on languages and more on things like linear algebra, abstract algebra, graph theory, etc. then they would be much better at their job. just my novice opinion.

  10. “Do the simplest thing that will work.”

    Not sure if you will even read this, but:

    That is a great statement *but* I think it is critical to mention what I consider a serious and way-too-often-overlooked problem. Let’s say you start with a great code base. Then you amend/change it ‘as simply as possible’ to add a new feature. And again for a new feature, and again. Every step being done ‘simply’ actually adds unnecessarily to the complexity of the program. What really needs to happen is the simplest implementation that incorporates *all* features, including the ones that had previously been implemented. And this takes more time and more effort in the short term, but boy does it save on time and effort in the long term.

  11. I think that the best hope for improving productivity will actually be to agree on limits on how we use these tools rather than seeing how far we can push the envelope.

    This was the philosophy that brought the world Java. It failed, and it failed definitively with J2EE. If you took this idea seriously for even a fraction of a second, you have failed to learn the history of programming, and are doomed to repeat its mistakes.

    I’m sorry, it’s just the truth.

  12. Since my comment disappeared into a black hole with no feedback, I can only assume it’s gone into some moderation zone, which means I should assume people will read it, which means I should probably be less obnoxious about it, so: Java was designed to prevent people from fucking up. It did not; instead it prevented people from writing great software. That’s in the nature of design approaches based on minimizing the damage that the incompetent can do. The incompetent can fuck up anything; the only thing you achieve with standards and practices is restraining the so-called “wizards,” who create all the advances that save everybody else so much time. Exercising control to prevent incompetent mistakes is a sucker’s game; freeing up the language to “give them enough rope” results in great libraries which make everybody more productive (or at least, everybody who uses them).

    I’m rehashing a debate for you here from 2006. It comes from the days when people were saying Rails would never be accepted in corporations, and, more to the point, your advocacy of standards and practices to prevent people from fucking up is the same argument that was coming from the people who said Rails would never be accepted in corporations. Their logic was clean and their conclusions were false; to anyone who was reading those debates, your argument was, I’m sorry, completely discredited almost five years ago now.

    Sorry to go off in such detail on your final paragraph, which was kind of a tangent to your overall post, it just drives me fucking crazy that people settle these arguments definitively, only to see them revived a few years later and started over completely from scratch. “Those who don’t study history are doomed to repeat it,” etc.

  13. I think it depends on how draconian the limits are. I don’t think the Google guidelines, for instance, are any kind of crushing blow to innovation. I just don’t think that just because a language allows something you should necessarily do it.

  14. Ok. Thanks for the longer comment. I’m learning a lot about how bad I am at making my point.

    Putting people on rails (sorry, couldn’t resist) isn’t going to make bad programmers good. Crap can come from anything. That wasn’t what I wanted to say.

    I’m more interested in keeping some kind of boundary so that what results is maintainable enough by other people.

    I actually pushed for doing a recent project in Python/Django rather than Java, etc., because I value the benefits which some of these newer, better languages give you.

    I just think with great power comes great responsibility. The cool meta-programming trick you did is awesome and personally I will give you props for it. I just hope the next guys can figure it out too.

    Also, I think that not all “rope”, as you say, is equal. Some things allow for great expressiveness and freedom while not adding much to complexity. I don’t use Ruby, so I’m not an expert on that, but in Python there are function decorators: a cool syntax for people to add some functionality. But it’s pretty explicit when it’s used: @login_required, etc. JavaScript’s closures seem to be able to happen almost by accident, or at least they are not as explicit to someone looking at the code for the first time. That doesn’t mean we shouldn’t use them. I did today, but we need to be aware of what we are doing. And no, I am not trying to start a Python/JavaScript debate.

  15. I actually think the answer to the problem is small business. Seriously: I think the rarity of great programmers and the PITA factor of maintenance create a perfect storm which only small business can answer. Consider Brooks’s rule that adding programmers to a late project makes it later; if you want code that works well, you need very good programmers to take personal responsibility for it, and the smaller your team, the better. The problem with corporate programming projects is not actually the programming or the projects, it’s the corporate. That type of work is inherently doomed because a high level of code quality requires a particular type of ownership, both figurative and literal, and you can’t get that kind of ownership if the business isn’t structured for programmers to own things.

    Pretty much everybody who does cool meta-programming tricks decides to simplify them sooner or later. The real value in the meta-programming is not the mind-bendy or whatever, it’s the fluidity of your model. The more involved your (so-called) meta-programming, the more likely you need to clean up the foundations of your design, but a language with lots of meta allows you to forge ahead anyway and clean up afterwards, whereas with a language without lots of meta, you have a less fluid design, so a wrong assumption at the foundation means you’re just fucked and you’ve got to do things the hard way from there on in.

    If Mr. Meta and Mr. Maintenance are the same guy, there’s a good chance the inevitable refactoring comes and is clean and deep; it’s turnover that changes meta-programming from a useful technique into a maintenance liability. When you own the app you work on, however, you have a powerful incentive to curtail your meta-programming and constrain it only to useful situations. The emerging business landscape, lots of small-biz web apps linked by HTTP APIs, gives us much better reliability, and less hassle-prone maintenance, comparative to the old-school model of giant corporations with large numbers of theoretically equivalent programmers operating as cogs in machines.

  16. Good one, Headspin.

    On “Strive for the simplest thing that can’t possibly fail”, a case history:

    As design team head for an inventory system (8 warehouses, 250+ stores, lotsa trucks) we used an imaginary Dr. Y who would attack every transaction with wildly wrong entries. We designed and coded so Dr. Y could not screw things up.

    Then the following sequence happened:
    1. The prototype went live for several departments, and
    2. Some of the warehouses had to bust walls into doors for more trucks, and
    3. Eighteen months later the first bug had not yet been discovered.

    For those of you who had Fred Brooks talks, or are otherwise wise about inventory: turns up 38%, profits up 25%, out-of-stocks down 85%.

    Hey CollegeGrad,

    Watch the hubris, “I find that I often know exactly how to solve a problem before the veteran programmers at my job even have a chance to really think about it.”

    Suppose you are two s.d.s out on the normal curve; the other programmers probably average one s.d., so of course you have to explain ‘over and over again’, but it does not have anything to do with ‘veteran’. I started in my teens at IBM in ’65, and I still have to explain, over and over, all sorts of things to self-impressed hotshots, and some of them cannot code cleanly until about nine months of experience and a lot of code reviews, until they ‘get it’.

    The real greenies are slower to catch on than the moderately experienced. As Headspin says, some never get it, and they are shoveled off to where they cannot do more harm; even young ones who got a 4.0 in CS courses sometimes cannot hack the real (design and coding) world and are sent to QA or the web sites.

    Keep in mind, young’uns, that great programmers, like Steve Gibson, will keep on blowing your nappies off until he is, maybe 80 or so.

  17. I guess what could make the difference, and what I’m just about to understand: program text “obfuscates” concepts, bringing them down to a textual and linear representation. For example, a “scene graph” for 3D programming is a very simple concept, but writing it down step by step and at different text locations makes it more difficult. I never wanted to program around things; I wanted to find the “essence”.

    All software tools and many libraries try to create the illusion that you don’t have to know how they organize the particular problem, when in reality you have to know exactly that to make sense of them. Sadly, all libraries try to create the illusion that things “become easier”, when in reality it is all just the reduction of redundancy (or sometimes the introduction of new ideas).

  18. What I meant: the “good” programmers maybe just do what they want and don’t care about efficiency too much for the first steps. I always sought the deeper “scheme behind all things”, which is a “distraction”. I would never have recreated a graph using arrays (the only useful dynamic variable accessible to a novice) because it would have been so weird and over-complicated to do it that way, and I thought nobody else would do something like that and there “ought” to be a simpler mechanism for it (maybe “json” is that mechanism today).

  19. Seriously, you all (no matter how long you have been programming) are saying things that you can find repeated everywhere. From the “hacker myth” to the “small company myth”…

    Companies can’t blog, people do. Every time I read a post about someone saying how stupid/bad some language/framework/methodology is, I ask myself: does this person work on a project with more than 10 people?

    Most of the time that’s not the case; in fact, in 99% of cases it’s not. What I’m trying to say is that real projects are running and working every day. We are not asking if these projects will fail, but if the programming job will be fun/beautiful/great. And you know, reality is hard: most projects work because bad programmers can contribute code to big projects, bad programmers that you can replace any time. And all the work on these projects is a pain, and the programmers’ life a horror. That’s what reality says of real projects.

    We can now ask ourselves if this is right or not, but that’s what works in 99% of cases. No matter the size of the company or the size of the project.

    Agile methodologies, agile frameworks, agile languages… all those things can’t scale to code bases of hundreds of thousands (or millions) of lines of code, with different generations of people working on them.

    I hate Java, really; I program in Python if I can. But Java lets you create big code bases with normal people. And companies are great if they can make normal people create great things. All this stupid search for… talent? It is the last light of an era that is ending. Now that all companies are out of cash, you need normal people with normal salaries to do the work. The best and most profitable companies are the ones that make profit from normal workers, not the talented ones.

    And programmers are just a cog in the machine; now system administrators are as important as them in many web companies. And you know, they can have a real life; they can be sysadmins at 40 or 50 years old without too much problem.

    I suppose that most of us developers entered the field because we loved the idea of being able to change the world from our keyboards. The sense of power was like a drug. But the reality is that 99.99% of us will never change the world. We will only contribute to things that make the world run as usual. I think the idea of a programmer saving a (big) company is a myth, because the amount of work that a programmer can do (no matter how great she is) is limited. She has to rely on others. But we are all lone “machos” who can code everything by ourselves… aren’t we?

    Sorry for the long commit, but this is the 10,000th post I have read about how cool it would be to have “great” programmers everywhere, while the writer forgets one of the main rules of engineering:'s_law. No matter how many cool great programmers you have, their contribution will not be that great 😉

  20. Doh! I wrote “commit” instead of “comment” xD; that’s what programming does to your brain.

  21. There’s something about incompetent people tending to overestimate their competence and incorrectly judge the competence of others. I would be careful about claiming that I was a wizard, let’s just leave it at that.

  22. Any competent DIYer can build a dog kennel without a blueprint. But try building a 60 story hotel complex without lots of them! Same too with programs. Once you can’t keep all the details in your head you need to analyze, design and even plan. Of course, you get a bit finicky about the stress grade of the concrete when you have a skyscraper balancing on it. Same too with the frameworks and tools. What works for a two guys in the garage project doesn’t work for a telco or insurance co.

  23. Why does everyone get so hot under the collar about this stuff?
    There’s all these people screaming and shouting at each other. Do they behave like that in the ‘real world’? Are you all mental?

  24. Unfortunately, people are trying to use NodeJS for things it was not meant to be used for. It was conceived so that people could write socket applications more easily. It would be very sad for it to shift all the way to being a server-side scripting environment.

    While there are already a series of PoCs related to NodeJS and web applications, let’s hope it doesn’t get wide adoption.

  25. Pingback: Anonymous
  26. I just started my career as a junior web programmer. I don’t consider myself a good programmer (due to lack of experience, mostly). I think it’s possible to learn to be a better problem solver, though (and thus a better programmer): simply solve more problems. You can become a critical thinker by doing so. I think that the difference between a good programmer and a great programmer is motivation.

  27. I’ve had to deal with JS a lot lately and I have to say I’m not crazy about it. I’m not sure if it’s the dynamic typing or the lack of any easy-to-use OO stuff, but I was wondering about something you said: ‘JS got stuck in browsers and we all got stuck in browsers and so we’re all stuck with JS’… why is that?
    How hard would this be?
    Step I: Make a strongly (not statically) typed, OO-heavy, interpreted language. Something like Ruby (which I like), but with type decorators so a parser could check things for you and help with refactoring.
    Step II: Make an add-in for IE, FF, and Chrome to support this language, which is downloaded as plaintext and interpreted, just like JS.
    Step III: Everyone will start to like it; maybe FF and Chrome will start to support it natively. (IE will refuse until 10 years after it’s old news and then furiously try to jump on the bandwagon.)

    Is that a crazy idea?

  28. @josé maría:
    “Now that all companies are out of cash you need normal people with normal salaries do the work”
    What that sounds like, at least to me, is that you are saying that anybody can write code, so companies just need to get somebody to write the code, pay them a clerk’s wages, and then the company will have its problems solved.

    I spent some time trying to teach programming to students at a business college as a part-time instructor. (I couldn’t afford to do it full time because the pay was too low.) I had wondered why so many crappy programmers came out of the business college and, when they had a part-time instructor opening, I decided to find out for myself. The origin of the problem was that, when a potential student came in and the “counselor” asked them what they wanted to do, if the potential student said, “Make money.”, the counselor put them in the Computer Programming track. I’m here to tell you that not everyone can write code, much less good code (and forget about excellent code).

    I’ve been in the profession over 40 years (going on 42 now) and I have learned several things about those in it. The author of this blog is correct in that poor coders can be taught to code better, but they will seldom become “good” coders, and each stage up the line of skill has its limits. I can teach almost anyone to write a program, but I can’t necessarily teach them to write it well, much less why to write it or how to tell the difference between a well-designed and well-written program and one that isn’t. Similarly, I can teach almost anyone how to follow the recipes in a cookbook and bake a cake or cook an entrée . . . but it takes skill to be able to alter the recipe to make the cake better or to make a different cake, and even more skill to have the main entrée and the 5 side dishes and the freshly baked biscuits all come out together so that they can all be served together.

    @anonymous boy:
    “the “good” programmers maybe just do what they want and don’t care about efficiency too much for the first steps. ”

    I don’t think that is the approach that “good” programmers take . . . although I may be thinking of the “good” programmers I worked with in the past and not those of today. A “good” programmer considers the efficiency and does what is needed (not necessarily just what they want to) 😉 . . . most of all, though, a “good” programmer considers what might happen to cause problems. A “great” programmer, in one form or another, follows a rule I was taught early in my career: “Code for Failure and Success will be a happy by-product . . . code for Success and Failure will be a constant companion.” In other words, if you code to handle only the preferred inputs so that they yield the preferred outputs (i.e. for Success), you will always find yourself dealing with problems triggered by non-preferred inputs. On the other hand, if you code for the preferred inputs and for anything else, so that you handle the preferred inputs as desired but handle anything else in a specific manner (i.e. you code for the Failures as well as the Successes), then you will find your application working under even the most adverse conditions.

    The difference between a poor programmer, a good programmer, and a great programmer lies, to a great extent, in the amount of forethought that goes into the development process . . . and the amount of time spent in design rather than just jumping right into coding.

    @Jay Nicks,
    Right ON!!!!

    “Strive for the simplest thing that can’t possibly fail.”,

    Always remember Einstein’s admonition, “Make everything as simple as possible . . . but no simpler.” 😉

    There used to be a general pride in a “good hack” and in doing things not only right but with a simple elegance . . . and in “embedding the hooks” on which expansions and additional capabilities could be hung. In the last 15 years, I have found that I tend to look on the next generation of developers with sadness, because they seem to strive for complexity while failing to consider the implicit risks involved with complexity, dismissing the prospect of problems with, “Oh, we’ll fix that in the next release.” They also seem to denigrate the craftsmanship of veterans and assume that, just because there is grey at the temples, there is fog in the brain.

  29. Walt: Daddy what’s gradual school?
    T. S. Garp: What?
    Walt: Gradual school. Mommy says she teaches at gradual school.
    T. S. Garp: Oh Gradual school is where you go to school and you gradually find out you don’t want to go to school anymore.

  30. I think a deeper problem is that the development paradigm hasn’t changed for 50 years, taking the high-level Algol 60 language as a base date. People still crank out lines of code in the same general way. If any other area of technology had shown so little change in 50 years we’d be laughing at it. Articles like this one, trying to decipher why some developers are better than others, should perhaps be asking why the heck we haven’t yet figured out how to put together a reliable and secure product.

  31. Your retirement funds are with the PROGRAMMER … error … portfolio manager on Wall Street.
    REAL WORLD paper: “Survivor Bias” Distorts Returns in 41 of 42

    As a geek – older than 50 – I am nearsighted and wear THICK GLASSES.
    I can’t see the lines in ASSEMBLY but can fly in Ruby or Python.

    criticisms (no, it is NOT personal)
    1.) observer bias – I think I am really smart and other programmers less smart. Ego?
    2.) the modern-day team is partly outsourced and touches the Internet. Who is really good and who is really not? See the security bugs in the Linux kernel and the Python version upgrading.

    3.) theme: smart people are born, not made, versus the TEN THOUSAND HOURS thesis. The human brain is highly malleable, but??

    4.) in today’s world, the variance between good, fair, and bad programming skillsets – grade levels A, B, C, D, E, F – is large.

    5.) writing KLUDGY and weak Python on MULTI-CORE and now the IBM optical chip (with no copper communications bus) – brute force – throw hardware at the problem.

  32. I started self-taught in high school in the mid-to-late 1960s. FORTRAN, COBOL, Algol, PL/I, and several assemblers including BAL. IBM 360 (various models), CDC-6400, PDP-8 (various models). Back when Big Iron was anemic by today’s standards and smaller things were even worse.

    Learning to program on systems with resource limitations is, I think, a good exercise for anyone planning to program professionally — which I barely do. I got an S.B. in theoretical math and went on to grad school in EE – not CS – writing 100 pages of F77 for numerical modelling for the S.M. and approx. 70 kLOC of 66% F77, 31% 68000 assembly and 2% Pascal for image acquisition and analysis for the Ph.D. (average rate of (not fully debugged) 100-200 LOC/day). These days I do some F77 modelling and some 8051 assembly for embedded instrumentation.

    My observations on OO are two-fold:
    1) you have to define a lot of objects — defining stuff TENDS to lead to write-only code. FORTH has been criticized for that. I don’t think it is inevitable, just likely. And the reason it is likely is:
    2) to do OO right, you have to work by a different paradigm, one that most programmers aren’t used to and most managers will NOT tolerate. The paradigm is to spend the first 60-75% of the project time defining, then criticising and re-defining the classes and their structure. Once you’ve got that both finished and correct, you can move on to the bulk of the coding. It takes discipline to rework the classes and to throw away a flawed design and redo it right. Simple procedural languages do not require such extreme care. You can hack things together in a simple form and then update/expand/correct. Not that that is the best way, but it does work for procedural — it does NOT work for OO. You end up with write-only code and spaghetti class structures — I mean Multiple Inheritance — fer cryin’ out loud!

    Maybe my take is inherently flawed due to my internalization of the procedural view-point. When I started programming you could bump into not just twos complement, but ones complement, sign-magnitude, decimal (both fixed and float) — all in hardware. There were stack oriented machines and non-stack machines. Ones with lots of registers and ones with few or even none.
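    For readers who never bumped into those machines, the difference is easy to make concrete. The sketch below (Python, purely illustrative) decodes the same 8-bit pattern under the three signed-integer schemes mentioned above; on real hardware each interpretation was wired in, not chosen in software.

    ```python
    def twos_complement(bits: int, width: int = 8) -> int:
        """Interpret `bits` as a two's-complement signed integer."""
        sign = 1 << (width - 1)
        return bits - (1 << width) if bits & sign else bits

    def ones_complement(bits: int, width: int = 8) -> int:
        """Interpret `bits` as a ones'-complement signed integer."""
        mask = (1 << width) - 1
        sign = 1 << (width - 1)
        return -((~bits) & mask) if bits & sign else bits

    def sign_magnitude(bits: int, width: int = 8) -> int:
        """Interpret `bits` as a sign-magnitude signed integer."""
        sign = 1 << (width - 1)
        mag = bits & (sign - 1)
        return -mag if bits & sign else mag

    b = 0b11111110  # the same byte on three different machines
    assert twos_complement(b) == -2
    assert ones_complement(b) == -1
    assert sign_magnitude(b) == -126
    ```

    Same bits, three different values — which is exactly the kind of assumption that has become invisible, “no more obvious than water is to fish.”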

    K&R C basically took the PDP-11 hardware architecture and codified it into the language expectations. Later C standards have tried to move away from the limitations of that with decent success, but the presumption of binary twos complement, stack oriented, 8 bit bytes and multi-byte words has come to dominate both the hardware and software mindspace. That means people learning computer stuff since about the mid to late 70s TEND to have a narrower range of experience. Not all, some people get stuck working with 4-bit embedded processor architectures or start off with classical IBM mainframe stuff, although probably not BAL. But certain expectations are now so pervasive that they are no more obvious than water is to fish.

    I don’t consider myself a programmer, I’m an engineer who does some numerical sims and some embedded processor stuff – so my viewpoint on all this is skewed. It is further skewed by my having found something good enough (F77 with traces of C and assembly) for almost all my needs. I’ve occasionally pulled out AWK, but not too often. And while I have programmed in PL/I, Algol-60 and Algol-68, COBOL, LISP, SNOBOL, Basic, Pascal, and C with occasional dabbles in a few other languages, I tend to revert to what works for what I do — assembly for embedded stuff, F77 w/ the occasional C/asm addition for numerics and other things.

    So my take on all this is colored by my myopic experiences. However, I think that is true of a lot of commentators, and I don’t know how many of them realize it. I’ve heard all sorts of nonsense over the years (e.g., LISP for everything – yeah, right).

    So, if there is any way to upgrade a programmer, I expect it involves breaking their mental limitations (necessary but not sufficient). Expectations for hardware, software, and language behavior must all be broken so they can expand. That requires people going outside their comfort zone — which everyone tends to resist, but some resist more forcefully than others.

    There’s the old joke that “engineers can write FORTRAN-66 in any language,” which is quite true. And Unix-bangers can write C in any language &c &c. It is when you have enough mental language paradigms that you can choose one appropriate to the problem at hand and write in that, no matter what language-du-jour the Pointy-Haired Boss demands, that you are an expert programmer.

    So if you want to try alternate hardware architectures, there are tons of simulators out there available through the retro-computing enthusiasts. Software can be a bit harder to find, but the same retro-computing people often have secured hobbyist licenses for things such as OS/360. Certainly alternate language approaches are available: you can find AWK easily, SNOBOL or SPITBOL with a bit more work, PL/I sure, BCPL a bit harder &c &c. Do you want to bend your mind into a pretzel doing mental yoga? Me, I’m doing a variety of things, including older electronics tech — vacuum tube electronics helps me design better electron and ion detectors for spacecraft — after all, vacuum electronics is vacuum electronics — even when I amplify the signals with semiconductors and control the system and process the data with an Atmel AT89S52.

    Give me a few more centuries, I may get there yet.

  33. In a business sense, we generally get paid to write code that takes a complex or cumbersome task and makes it simpler or faster; ergo, as a programmer, your job is to understand and break down complexity to make it simpler.

    To hark back to the original post, that’s got nothing to do with writing code. Coding, languages, and methodologies are tools to make this happen.

    My experience having managed large corporate teams of programmers has led me to:
    1) “best practice” is an industry sin. It leads to padding of projects where no one has a view of what is really needed. Yes, it makes a bad-to-mediocre programmer productive, at the cost of hamstringing your really talented staff. Some projects can survive on process alone; this is usually where the good programmers have left a legacy of well-structured (architected) code and the project is really about tuning or maintaining what is there. That isn’t a profitable use of the good programmers.
    2) sort of related: I really rue the demise of the analyst programmer, someone with the end-to-end view of what is needed. An aspect of a good-to-great programmer is that they are often effective as a business analyst; they might be a bit rough on the touchy-feely side, but they can string the design implications together. It’s the other end of the same skill set.
    3) a good programmer can be as much as 40 times more productive in a corporate environment (sorry, I can’t reference the research directly, but it comes via Rob Tomsett, the extreme/agile project management guy)
    4) a big thing that holds back programmers is a sense of entitlement (the making-money thing). I can’t remember the number of times I’ve had the discussion that having 7 years of programming does not make you a senior; it’s the attitude, maturity, and taking responsibility for the overall delivery.

    Sorry, getting a bit random here; I will try to summarise.
    Coding is just a tool; it is the “design” thinking to deal with complexity that makes great programmers.
    There is a place for the mediocre, but sorry guys, it is the hard-graft, boring stuff. Some of this is about doing your time and learning the ropes. In the old crafts it was called an apprenticeship; the trick is finding the right master(s) to learn from.

    I do think good can be learned, it is about attitude, ownership and critical thinking
