By Ian Griffiths, Technical Fellow I
A conversation about .NET, The Cloud, Data & AI, teaching software engineers and joining endjin with Ian Griffiths

In September I joined endjin as a Technical Fellow (an entirely new branch in endjin's career pathway to accommodate me - more on that later). I have been involved with endjin since 2011, as an Associate, helping to deliver some of our most technically challenging projects (and if you go even further back, I attended Cambridge University with endjin co-founder Matthew Adams).

In order to introduce myself to the team, I sat down with Howard for a Q&A session. This was originally published on LinkedIn in 5 parts, but is republished here, in full.

How did you first get into technology?

I have to thank my father for that. He went into computing directly after university (an unusual move in those days) in BP's operational research division, and worked in the computing industry for his whole career. He has always shared his joy in all things technical with me. We had various computing and electronics paraphernalia around the house throughout my childhood, and I was extremely lucky not just to have access to this stuff, but that I had someone happy to explain how it worked in as much detail as I wanted. He has always been a very practical man, and is always working on creating something. His enthusiasm is irresistibly inspiring.

Did your academic areas of study align with what you do now?

Eventually, yes. At school, I really only got to do electronics, not computing, and even then, only in the final few years. For what felt like an eternity, school and technology felt like opposites. Computing was a thing I got into trouble for because I'd rather be doing that than homework. So when I discovered that computer science was something you could actually do as your main subject at university, there was never really any doubt about what I'd do. That said, I think it's important not to get too narrowly focused. At A-level, in addition to Physics, Maths, and Further Maths, I also took Music, and I'm very glad I did. And when I went on to study CompSci at Cambridge, I continued my musical activities as a 'choral exhibitioner'.

What are the most interesting things you've been involved in so far in your career?

My first job sounds pretty obscure when I describe it, but it opened up fascinating worlds to me, forcing me to understand things that have been useful ever since. It involved writing kernel-mode device drivers for ATM network cards. (That's 'Asynchronous Transfer Mode' by the way, nothing to do with cash machines.) Writing drivers requires a thorough understanding of operating system internals. Debugging them requires a great deal of discipline, because most of the debugging tools and techniques developers are familiar with either aren't available at all, or operate under severe constraints because you're effectively attempting to debug the OS itself. In a lot of cases it turns out that you also need to debug hardware, so you need to add oscilloscopes and logic analyzers to your inventory of debugging tools. Additionally, the fact that they were network cards meant I also became immersed in networking standards (I'm even a named author on an incredibly obscure network specification), learning things that continue to be useful today in designing and debugging modern applications.

The next significant event in my career was when I started teaching. I joined DevelopMentor, which is now sadly subsumed into a larger organisation, leaving a surprisingly minimal trace on the internet, but which at the time was the go-to company for learning about COM, and later .NET. Teaching something forces you to understand it in ways that you never will as a pure practitioner, because you are exposed to a much wider range of applications and points of view. You will be forced to think about the material from many more angles. (If you have to pick just one of teaching or practice, practice trumps teaching for depth, but the combination is formidable. I presume this is why universities require academics to teach.)

I also have to mention my work at BSkyB (now Sky UK, the company that runs the UK's largest pay TV service). I was lucky enough to be involved in two pioneering projects there. I was the lead software developer on the user interface prototype for the first version of Sky+, their PVR product. This was the first time I was involved in a long-running process of user interface development, in which we would put a new iteration of our prototype in front of a focus group every week and find out what was good and what was not. This is a humbling experience that I'd recommend to anyone. (And I mean 'humbling' in the original sense, not the peculiar modern way people use it when describing how awesome they are. I mean I was regularly confronted with irrefutable evidence that I had not been as clever as I thought.) It is fascinating (if somewhat embarrassing) to put some aspect of a user interface that you thought you had refined to a crystalline state of clarity into the hands of an ordinary, sensible human, and then watch them misinterpret it in a way that is not just completely at odds with your expectations but which is also, with hindsight, entirely reasonable of them. I also worked on the first of Sky's interactive applications to support dynamic updates from multiple information sources. (Prior to that, interactive apps were based on a static 'carousel' model, not unlike how Teletext works but with more modern graphics. Updating text or graphics typically meant burning a new CD-ROM, and walking over to a datacentre to put it into the machine that would then repeatedly send its contents up to the satellite.) That was another fun challenge involving working at multiple levels of a technology stack.

I'm going on a bit aren't I? And I've not even got to the 21st century yet... In my defence I'm something of a greybeard, and have been lucky enough to have had a lot of interesting jobs.

Programming C# 5.0 is one of Ian's many popular programming books.

The next big development in my career was the arrival of .NET, and of particular interest to me was WPF. Because I had done a certain amount of UI work (e.g. Sky+), and because my first book had been on Windows Forms, the first client-side UI technology in .NET, Microsoft invited me to get involved with WPF prior to its launch. Initially this meant getting various articles about the tech ready for when it went public, and it ended up with me co-writing O'Reilly's 'Programming WPF' book. Since first going public in 2003, WPF has been pronounced dead on many occasions. First it was going to be killed off by RIAs (Rich Internet Applications) in the battle fought between Flash and Silverlight for total domination of the UI. It has outlived both of those technologies. And it was then supposed to have been rendered irrelevant by Metro apps, which became 'Modern' Windows 8 apps, which begat UWP apps... but WPF continues to see considerable use because even though its underpinnings are considerably creakier than the newer native app platform, it ultimately continues to be more capable. My WPF courses on Pluralsight continue to be very popular despite being almost 10 years old. I think this is because WPF is a deeply interesting piece of technology. This probably isn't the place to expand on why, but in a nutshell, I think part of its enduring appeal is that more than any other UI technology I've worked with (and I've worked with more than most), WPF is mainly built out of other bits of WPF. (For example, if you look at the way a Button is implemented, it's almost entirely composed from other primitives. There's very little 'secret sauce', meaning that anything you write can do most of what the built-in controls can do.) It is a testament to the power of composability.
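To make that composability concrete, here's a minimal sketch (assuming an ordinary WPF project; the visuals and values are arbitrary, chosen only for illustration) that rebuilds a Button's entire visual tree from other everyday WPF primitives, while the control keeps all of its usual behaviour:

```csharp
// A sketch: replace a Button's visuals with a composition of other WPF
// primitives (a Border wrapping a ContentPresenter). The Button still
// behaves like a Button; only its visual tree has been swapped.
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

static class CompositionDemo
{
    [STAThread]
    static void Main()
    {
        // Build a ControlTemplate in code: a Border containing a ContentPresenter.
        var border = new FrameworkElementFactory(typeof(Border));
        border.SetValue(Border.BackgroundProperty, Brushes.LightSteelBlue);
        border.SetValue(Border.CornerRadiusProperty, new CornerRadius(4));
        border.SetValue(Border.PaddingProperty, new Thickness(12, 6, 12, 6));

        var content = new FrameworkElementFactory(typeof(ContentPresenter));
        content.SetValue(FrameworkElement.HorizontalAlignmentProperty, HorizontalAlignment.Center);
        content.SetValue(FrameworkElement.VerticalAlignmentProperty, VerticalAlignment.Center);
        border.AppendChild(content);

        var template = new ControlTemplate(typeof(Button)) { VisualTree = border };

        // Focus, clicks, and commands all still work; we've only re-composed the visuals.
        var button = new Button { Content = "Composed from primitives", Template = template };
        button.Click += (s, e) => MessageBox.Show("Still a fully functional Button");

        new Application().Run(new Window { Content = button, Width = 320, Height = 120 });
    }
}
```

The point is that nothing in this sketch is privileged: the default Button template is built from much the same pieces, which is why anything you write can do most of what the built-in controls can do.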

Programming C# 12 Book, by Ian Griffiths, published by O'Reilly Media, is now available to buy.

Microsoft Global DevOps Bootcamp 2018

In more recent years (and I've realised with horror that by "recent" I mean the last decade or so) cloud computing has become dominant. One of the interesting aspects of this is that it lets us see an age-old pattern in action: technological advances simultaneously destroying old jobs, while creating many more opportunities. Cloud platforms such as Azure automate away a lot of the drudge work of IT departments, but at the same time they enable things that would previously have been impossible. For example, I worked on a project designed for a world I know well: the world of teaching. If you're running a class or a one-day workshop about some developer technology, you want to have practical exercises—Hands-On-Labs (HOLs) as they're known in the trade—for people to try out the stuff you've been talking about. And one of the bugbears of this is ensuring that everyone in the room has a machine suitable for doing the labs. If the talk involves pre-release technology, you really don't want to ask people to put that on their laptop. So I was involved in a project (with endjin, as it happens) that used Azure to provision VMs pre-configured with the software needed for the class. Initially we were using this for events with up to around 50 attendees. But earlier this year, we used this system to provision thousands of VMs across the globe, for Microsoft's Global DevOps bootcamp event. It took just two (2!) people (Brian Randell and me) to manage the configuration and provisioning of the machines used by thousands of attendees at around 50 locations worldwide to do the labs. Just a few years ago, the HOLs would have been a major logistical challenge, likely involving an order of magnitude more people. Some people might look at this and think "You're destroying jobs." But I don't think that's so, because in practice these events would not have happened if we'd had to do it the older, more expensive way.

What are your thoughts on the current state of the .NET ecosystem?

I think Microsoft has caused a great deal of confusion with .NET Core (especially with the premature and slightly too gung-ho presumption that .NET Core is where it's at, ignoring the reality that a lot of projects really can't migrate to it just yet), but we're mostly through that now, and there are many things about the new .NET world that are much improved.

The cross-platform availability introduced by .NET Core is interesting. It opens up the possibility of using .NET in many new places. Of course, it's not the first time we've had cross-platform .NET, and the previous incarnation (Silverlight) didn't exactly become ubiquitous. However, there are a couple of things that seem different this time around. Silverlight had a relatively narrow range of applications—initially it just ran in the browser, and even when it branched out a little further it was firmly a client-side technology. By contrast, .NET Core can also run in servers and on small, embeddable devices. But I think the much bigger difference is that Microsoft has changed a lot in the last 10 years. For one thing, Microsoft now supplies development tools (both paid-for and free) that run on operating systems other than Windows—something like VS Code was unimaginable back in 2008. But one of the most visible differences, and perhaps one of the most important to the likely adoption of .NET Core, is that Microsoft has got fully behind open source software.

The systemic commitment to open source is a big deal. You really can create a fix for a problem, submit a PR, and see it turn up in a release in a reasonable amount of time. This is a step change from the old world in which all you could really do was lobby Microsoft and hope that they did what you wanted at some point in the next year or two.

From ".NET Core 3 and Support for Windows Desktop Applications"

It's not without its downsides. One of the benefits of classic .NET's 'one massive release every year or 3' model was that everything worked together. One of the problems I've found in the new world is that it can be hard to find a coherent set of versions. You might find that some NuGet package you'd like to use has a minimum version requirement for a shared component, but that some other package has a maximum version requirement that's lower than that (which might be explicitly stated, or might be more subtle, in that it just stops working if you have too recent a version of the shared component). I certainly seem to spend a lot of my time these days diagnosing package version conflicts. I think it's worth it—going back to the pre-NuGet days doesn't seem very appealing. But some days it does feel like package management hell is out of control.

I think the powers that be in charge of .NET Core today are getting better at not breaking things. However, I do wonder if, as increased .NET Core adoption forces them to be ever more conservative, they will find that through a series of incremental policy changes, they have accidentally reinvented the old .NET monolithic model in which rates of change are necessarily tectonic. Perhaps we will look back at the early days of .NET Core as a Cambrian explosion: a period of furious activity that created a great many interesting things, but which was actually a pretty dangerous time to be alive. And then frustration at the slow progress will inspire another generation of devs to break everything for a few years in order to make progress.

There's an important open question over .NET Core: will the newly open approach usher in a new era of open source projects built around the platform? Developers can be forgiven for thinking that if they create something interesting on top of .NET, Microsoft will produce their own version. This has long been a concern for commercial software development—with any product in the Microsoft ecosystem you need to ask yourself whether Microsoft is going to crush you like a bug by releasing a competing product—but there is a similar worry with open source projects. Why put months of your own time into something if you think Microsoft might release their own version, rendering all your efforts irrelevant? I think Microsoft has become more sensitive to this issue, and more aware of the benefits of supporting existing efforts instead of replacing them, but only time will tell if it has been enough to make people feel that it's safe to build interesting new platforms on .NET Core.

We're 10 years into the Cloud journey, what do you think the next decade will hold?

I confidently predict that pundits who make confident predictions will look like fools in hindsight.

More seriously, I can attempt to answer this question, but I think in practice I'll be answering the question "What problems are you having today that you hope will be solved?" I think there's too much ad hocery in distributed systems, leading to unnecessary flakiness. I think there is considerable untapped value in research—it often turns out that a lot of the hard problems in computing have been solved by academics decades before practitioners realise this. But to exploit this in practice requires it to be packaged in a way that doesn't require every developer to immerse themselves in academic literature. For example, Service Fabric enables you to take advantage of a lot of deep thinking about reliability of distributed systems even if you haven't read the papers it is based on. I think we need more of this, although the apparent trend towards infrastructure over platforms (e.g., IaaS vs PaaS, or the way containers are often used in practice) feels like an unfortunate step in the wrong direction. Gaffer tape will only get you so far.

The Introduction to Rx.NET 2nd Edition (2024) Book, by Ian Griffiths & Lee Campbell, is now available to download for FREE.

I'm old enough to remember when C++ first started to become popular, and there was a joke doing the rounds at the time: it's called C++ because the language has been incrementally improved, but people continue to write ordinary C in it. Superficially that's just a moderately witty play on the name of the language, but it expresses a deeper truth, one that is still relevant today: it takes a very long time to learn how to wield new powers effectively. And this isn't just a technical phenomenon—early television often involved nothing more than pointing a camera at exactly the same kinds of things people were already doing in plays or on the radio. It took years to work out how to create things that could only have been done on television.

It's probably too early to say what a 'cloud native' application really looks like yet. There has been an irresistible temptation to 'lift and shift'—to continue doing things exactly as you always have done, but replacing your company's data centre with Azure, AWS, or Google Cloud. To be fair, there's something to be said for this conservative approach: your chances of success might be higher if you don't attempt a root and branch redesign of your architecture at the same time as you migrate to the cloud, which probably explains the preference for cloud as infrastructure over PaaS. But now that using a cloud platform is, for many people, the norm, the obvious next step is to ask what new approaches are made possible now that we're in this new world.

Which technologies or paradigms do you think are powerful, yet haven't been widely adopted?

I've already mentioned Service Fabric. In some ways it seems ridiculous to say that it hasn't been widely adopted given how many Azure services run on it, but I don't think it has seen anything like the same level of adoption outside of Microsoft's own services. It embeds a lot of powerful distributed systems learning, and packages that into a platform that is relatively straightforward to use, enabling you to benefit from the research behind it without necessarily having to read all of the corresponding academic literature.

To give a completely different sort of answer: Algebraic Data Types, and in particular, the combination of sum and product types. These are the somewhat academic terms for a pair of relatively simple concepts. Product types are in fact pretty much ubiquitous, either in the form of tuples, or record-like types. What's often missing is sum types, which are strongly-typed discriminated unions. F# has both. So does TypeScript, so arguably it's not so much that they're not widely adopted, as that I just really really want them in C#, which only has product types, not sum types. It's not instantly obvious why having both is so useful. It's really only once you've tried them that it becomes clear that they provide a very natural way to model information, and you wonder why they aren't a standard feature of all programming languages. Once you've got used to them, working in a programming language that doesn't have them feels unpleasantly restrictive.
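To illustrate the gap, here's a small sketch (assuming a recent C# compiler with records and pattern matching; the domain types are invented purely for illustration). The product type is trivial, but the sum type has to be emulated with a class hierarchy, and the compiler can't check the match is exhaustive the way F# or TypeScript can:

```csharp
using System;

// Product type: a record combines a latitude AND a longitude.
public record Coordinate(double Latitude, double Longitude);

// Sum type (emulated): a payment method is a card OR a bank transfer OR cash.
// C# has no built-in discriminated unions, so we fake a closed union with an
// abstract record and a fixed set of derived records.
public abstract record PaymentMethod;
public sealed record Card(string Number, string Expiry) : PaymentMethod;
public sealed record BankTransfer(string Iban) : PaymentMethod;
public sealed record Cash : PaymentMethod;

public static class PaymentFormatter
{
    public static string Describe(PaymentMethod payment) => payment switch
    {
        Card c => $"Card ending {c.Number[^4..]}",
        BankTransfer b => $"Transfer to {b.Iban}",
        Cash => "Cash",
        // In F# or TypeScript the compiler knows the union is closed and can
        // verify that the match is exhaustive; here we need a fallback arm.
        _ => throw new ArgumentOutOfRangeException(nameof(payment))
    };
}
```

Once you are used to modelling "this OR that" directly in the type system, having to reach for a workaround like this feels like the restriction it is.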

And I have to mention one of my favourite technologies, the Reactive Extensions for .NET (Rx). I recently saw a tweet about Rx saying "Microsoft released incredible functionality for developers that every other platform picked up on except their own." This is sad but true. Rx is a brilliant thing: it solves a specific, important problem (how to work with streams of events) in a way that is backed up by solid theoretical principles, yet doesn't directly confront you with mind-bending abstract concepts - you're not obliged to write yet another essay explaining what a monad is before you can start using it. It is an excellent example of the kind of thing I've been alluding to in this interview - taking sound theoretical work and packaging it for easy consumption. The widespread adoption that various incarnations of Rx have seen outside of .NET is testament to the quality of its basic design, but it remains perplexingly underused in the .NET world from which it originated. People seem to think Rx.NET is 'abandonware', whereas in fact it's just "done".
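To give a flavour of what that looks like in practice, here's a tiny sketch (assuming the System.Reactive NuGet package) that treats a stream of timer ticks as an IObservable<T> and composes a LINQ-style query over it:

```csharp
// A small sketch using System.Reactive: events as an IObservable<T>,
// queried with composable operators rather than hand-written event handlers.
using System;
using System.Reactive.Linq;

class RxSketch
{
    static void Main()
    {
        // A tick every 250ms, delivered as a stream of long values (0, 1, 2, ...).
        IObservable<long> ticks = Observable.Interval(TimeSpan.FromMilliseconds(250));

        // Declaratively describe what we want: even ticks only, as text, and
        // stop after five of them.
        IDisposable subscription = ticks
            .Where(n => n % 2 == 0)
            .Select(n => $"Even tick: {n}")
            .Take(5)
            .Subscribe(
                text => Console.WriteLine(text),
                () => Console.WriteLine("Done"));

        Console.ReadLine();        // keep the process alive while events arrive
        subscription.Dispose();    // unsubscribing is just disposing
    }
}
```

The same handful of operators works whether the underlying events come from a timer, a UI, a message queue, or a network feed, which is a large part of Rx's appeal.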

There's a lot of hype around AI & Machine Learning at the moment. What's your take?

I think there's a fundamental philosophical mistake at the heart of the worst of the hype. And as ever, the hype can cause real damage by discrediting the technology. (AI has already been through that cycle once—hyperbolic claims in its early days caused the field to be seen as more or less irrelevant for a generation.) Many people genuinely seem to believe that we're on the verge of inventing superintelligent machines. If we are, I don't believe these will be direct descendants of today's AI or ML systems.

The mistake, I think, is an unsubstantiated belief that cognition is something that can be perfected if only we can replace our imperfect human brains with machines. (And actually this is an instance of a larger category of philosophical error: our predisposition to believe in perfectibility. I suspect it's a handy shortcut that evolution has left us with, but it can lead us to believe some crazy things.)

Believing in the possibility of perfect cognition is a mistake not because I think it's definitely wrong, but because we have no way of knowing right now whether it's possible. Unfortunately, people look at moderate successes such as recent progress in machine translation, and mistake this for mechanised thought (no doubt encouraged to do so by the slightly misleading 'deep learning' moniker that gets attached to some recent AI work). A serious look at the current technology reveals it to be no such thing—it reminds me more of early AI work such as the Eliza program, in which the surprising success of a highly circumscribed application leads to an inappropriately optimistic assessment of how far we've come. It's well worth seeking out Douglas Hofstadter's article in which he examines in detail how good a job automated translation software is actually doing. He is a lifelong proponent of the potential of machine intelligence (see his book, Gödel, Escher, Bach), and even he thinks current technology is nothing of the sort.

I think it's instructive to look at one of the more conspicuous AI applications where companies put their money where their mouth is: attempts to guess what you might like to buy next. It's just not that good. If you buy a washing machine, these systems attempt to sell you more washing machines for the next few weeks. This is understandable, but it illustrates that this is not any deep kind of intelligence.

In particular, this isn't going to be improved by throwing more compute power at it—even if Moore's law wasn't in its death throes, it still wouldn't save us. Throwing more power at this won't produce a next-level higher intelligence. All you get is mass-produced stupidity.

In fact, I think there's a great deal of value in the current systems, but we need to be realistic about what we currently know how to do.

What does democratising ML look like & is this something only the big vendors can deliver?

I'm reminded of a well known quote here, and yes I know it's a mistake to equate ML and AI, but it's relevant to both: "As soon as a problem in AI is solved, it is no longer considered AI because it works." (What seems less well known is who first said it—I've been unable to find a reliable citation.) The quote is usually invoked as a lament—this is sometimes described as the curse of AI—but I think it may be an important part of democratisation. We'll know that the successful democratisation of ML is complete when nobody thinks to call it ML any more.

(I'm also reminded of a time at college when some engineering students were horrified that the CompScis hadn't heard of Finite Element Analysis. But it wasn't that we were unaware of the technique, we were just surprised to discover it had a name.)

There is much to be gained by embracing asymmetry of effort: developers stand on the shoulders of giants every day, so much so that it's easy to ignore. We may be pleased with ourselves if we construct a clever query against some data model and project the results into a useful graphical visualization, and it's easy to forget how that's possible. Decades of graphics research and development lie behind our ability to present information on a screen; ditto for the storage and search systems that make complex queries against huge data sets possible; it's bizarre that we just take for granted the ease with which networking technology enables us to connect up the pieces of a globally distributed system; even the basic step of writing a line of code rests on a long and complex history of language and compiler research. And it's all underpinned by the numerous streams of basic research that make computers real, and not just an abstract dream of mathematicians.

It's important, this ability to ignore, or even be completely oblivious to, the monumental efforts of legions of researchers and developers that made your work possible. It has always been the key to democratising technology. Ideally you want asymmetry so large that you can't even see it—that way, the ratio between the amount of work you have to do and the effect you can have is maximised. I expect it will be the same with ML.

There's often a prolonged phase in which it's all a bit clunky (as was brilliantly satirised in this cartoon from Monkey User). I think we're currently at a stage with ML where a lot of the rough edges are still exposed, and we haven't yet worked out the best public face to put on top of the details. But that will improve rapidly.

This productive hiding of asymmetry can operate at a couple of levels in ML (and also with other branches of machine intelligence). There's the usual step of being able to exploit the fruits of other people's research, and this is possible today: you no longer have to read research papers and then write your own implementation of the ideas in them to do ML, because other people have done this and published tools you can use. But the second level is perhaps less obvious: even having made a particular technology choice, there's still an asymmetry between building a successful ML model, and using it. Developing a model requires a particular set of skills, and a certain scientific approach. But you don't need all that merely to use the resulting model. And you can often see a corresponding asymmetry in the computing resources required: the learning stage can often involve a lot of time on powerful hardware, but the end result may be a model that can be run on a low-powered embedded device.
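To make that second asymmetry concrete, here's a sketch of the consuming side (assuming the Microsoft.ML NuGet packages; the model file name and its input and output shapes are hypothetical). None of the data science or the training hardware is visible here; using the model is just another library call:

```csharp
// A sketch of consuming a pre-trained ML.NET model. The model file and its
// input/output shapes are hypothetical; the point is that using a trained
// model needs none of the skills or compute that building it did.
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

public class ReviewInput
{
    public string Text { get; set; }
}

public class ReviewOutput
{
    [ColumnName("PredictedLabel")]
    public bool IsPositive { get; set; }
}

class ModelConsumer
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Load a model that somebody else trained and shipped as a file.
        ITransformer model = mlContext.Model.Load("sentiment-model.zip", out _);

        // Wrap it in a prediction engine and use it like any other component.
        var engine = mlContext.Model.CreatePredictionEngine<ReviewInput, ReviewOutput>(model);
        ReviewOutput result = engine.Predict(new ReviewInput { Text = "This product was a joy to use." });

        Console.WriteLine(result.IsPositive ? "Positive" : "Negative");
    }
}
```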

This suggests that we will see something akin to libraries in the ML space—a shift from a world in which most users of ML are building their own models to one where you just search 'MLGet' for a model that suits your needs. And if we take a step back and look at the broader world of machine intelligence, you can see examples of this today. There are specialized services for processing natural language, or for image categorization, in which someone else has already done the bulk of the hard work. (There's still a certain amount of training required to tailor these things to your needs, but it's a world apart from starting from scratch.)

Is this something that only the big vendors can deliver? I'm not sure. The need for asymmetry suggests that deep pockets need to be involved somewhere, but that doesn't necessarily mean big companies—most of the older industry examples I gave above began with university research, for instance. And perhaps some fraction of the ML startups will produce useful fruit. Perhaps big vendors have an important role to play as curators, because the choices they make will have a strong influence on what ordinary developers see as ML.

What are your views / approach to helping the next generation of software engineers?

One thing I think people of my generation in particular need to remember is how much more stuff there is to understand now. The home computers of the 1980s that I learned on were relatively simple devices. The first one I had regular access to had just 1KB of RAM (and 8KB of ROM to hold the built-in BASIC interpreter and rudimentary OS). These are the kinds of systems where you could conceivably remember what every byte is being used for. Today's Raspberry Pi is as complex as mainframe computers were back in the 1980s. And the Pi was designed specifically to fill the void left by the computer systems people were able to learn on in the 1980s.

I am not trying to bash the Raspberry Pi—I think it's a great hands-on way of learning about computers. It's just that everything is more complex now. This concerns me because I think that to succeed as a software engineer, it's important to dispel magical thinking. When you first start to use computers, so much does seem like magic, not least because the ability to ignore the details of some abstraction is critical to productivity. However, there's a subtle distinction between choosing to ignore details so that you can think at the level of a particular abstraction, and not actually having a choice about it because you don't understand how that abstraction is provided.

I remember a moment of revelation back when I was wiring up a simple 8-bit computer on a breadboard as a teenager. My father explained a detail (what address decode logic does, and why) that finally removed the last mystery that had been stopping me from connecting what I had learned about the basic operation of transistors with higher level stuff like how a particular sequence of machine language op codes could cause an LED to light up. (Not that this removed all mystery: whether you build your computer out of transistors, relays, or dominoes, you're still left with the question of why physics works how it does, which I think is still unresolved.) This was a critical moment in my career, and as computers have become ever more complex, I can't help thinking that each generation has a much harder path to reaching this sort of demystification lightbulb moment.

I think there's a similar problem at a broader scale: the amount of stuff you're dealing with at once in even a fairly modest web app can be quite daunting. When my generation got started, you were most likely using a computer that wasn't connected to anything. The WWW didn't exist when I started out. And when it did appear, it was all static content. It took years for client-side code in a browser to become a thing, and then a while longer for that code to be able to communicate with servers, and longer still for JavaScript frameworks to become a thing. My generation had years to adjust to each of these developments. Developers starting out today are expected to assimilate all of this at once, and to contend with the fact that the JavaScript framework that was where it's at when they started work this morning was considered passé by lunchtime, and had been superseded by no fewer than 3 more exciting frameworks by the end of the day.

Someone with a more organised mind than mine might think that the solution would be to craft a carefully thought out learning path that introduces new developers to everything they need to know in a manageable, incremental, and systematic fashion. That might work for some people, but I suspect that the prospect of spending years learning the ropes before ever getting to contribute something useful would put a lot of people off. In any case, my experience of teaching developers indicates that they usually get much more out of training if it's in response to a need than in anticipation of one: not only are you better motivated if you know that the material solves your problems, you've also got a practical context into which to slot the things you're learning.

So maybe what we need is a collection of little islands of learning that you can visit to understand just one particular thing once you've run into the need for it. Or maybe my generation just needs to be better at remembering how long we had to learn everything.

What do you particularly enjoy about technology right now? What makes you think "if I'm working on that, I'm going to be having a good day"?

I am terrible at predicting what work I'm going to enjoy the most. I always have been—even going back to my student days, the parts of the CompSci curriculum I enjoyed most were often the parts that seemed most dusty and boring in advance. I think the philosophy of QI Ltd, the production company founded by John Lloyd (a TV and radio producer involved with an astonishing string of classic shows), is onto something: everything is interesting if you look at it carefully enough.

The best days are the ones where I find something unexpectedly interesting.

What is the number one thing you'd fix about developer life (as an individual)?

A 'one size fits all' approach to work hours and arrangements. Working 9-5 in an office doesn't suit everyone. Nor does working from home. This has been one of the biggest factors causing me to reject full-time employment for so much of my career. Endjin is unusual in recognizing this, and in creating a culture that supports many different lifestyles.

And what about the developer world as a whole? (e.g. everyone seems to be a middle aged white English speaking man)

I think we need more empathy.

I mentioned earlier that it's easy to forget how much new starters have to learn today. One fairly visible symptom of this is the level of hostility newcomers often encounter when they ask questions in online forums. I know some brilliant engineers who considered giving up because of the consistent barrage of unpleasantness they encountered, and it seems likely that many people will decide it's not worth the fight. Unnecessary departure of talent is a loss to the industry, and a system that is likely to ensure that only the most pugnacious make it through is unlikely to improve matters.

Empathy is a vitally important part of the job: the majority of development work involves trying to produce something that is useful to someone else. How are you going to do that if you can't see things from your customer's perspective?

I think this is inextricably linked with diversity. The more diverse your team is, the wider the range of perspectives you're likely to be able to bring to a problem, which will help everyone in the team broaden their personal horizons, not to mention being more likely to provide exactly what the customer really needs. And I believe empathy within a team can promote the openness necessary to increase diversity as it grows.

I realise that hanging over these fine words is the fact that I'm a white, male, privately educated Cambridge graduate. In the popular gaming analogy, I'm playing the game of life on the easiest difficulty setting. It's particularly important for people like me not just to be aware of that, but not to be lazy when looking for the potential in others: it's easy to assess the capabilities of people who've had more or less exactly the same background as yourself, but we need a bit more imagination than that to grow as an industry.

Whose books do you recommend that are perhaps not so widely read by "developers" (as opposed to CS majors)?

Most CompScis are told to read books on data structures and algorithms. The book of that very name by Aho, Ullman, and Hopcroft was on the reading list I was sent before starting at college, and I think it still has considerable value today. Good data structures are often the key to good technical solutions.

A good understanding of computer networks is vital today. For CompScis of my generation, "Computer Networks" by Tanenbaum was the essential text. (That said, the last new edition was in 2010. Unfortunately, I've not read any networking books for a while so I don't know if there's a better more up to date alternative. Even so, a great many of the fundamentals have changed only incrementally since then.)

I would highly recommend "The Annotated Turing" by Charles Petzold. The heart of the book is Alan Turing's seminal paper in which he introduces the idea of a universal computing machine. I think anyone working in this industry should have at least some understanding of the ideas in that paper. I wouldn't necessarily recommend reading the original paper on its own, but Petzold's book is a brilliant and very accessible treatment that provides all the context you need to understand and enjoy what is arguably the most important academic contribution to the world of computing.

And of course, everyone should own several copies of every book I've ever written.

What advice would you have for young people starting out on their career, or maybe heading to university to study CS in a few weeks time?

Have fun!

Don't be put off by show-offs. There will be some people who have some experience of programming, perhaps as a hobby, or in a gap year. And some of these will try to make you feel inadequate by flaunting what they've learned. It is bluster, and you can safely ignore it.

This next bit is specifically for those going to university. This might not work for everyone, but I'll throw it out there because I found it incredibly helpful: I wrote extensive notes after lectures—I was essentially asking myself "Can I explain this thing to someone else?" In cases where I thought I'd understood something, this often revealed that I didn't. And in cases where I knew I hadn't understood something, writing often helped. And it invariably made the next lecture in the series much easier to process. I only started doing this in my 2nd year, and with hindsight I wish I'd done it from the start.

And whether you're at work or college, wander off the beaten track on a regular basis. Some of the most interesting and exciting stuff may not be directly on the critical path to your next deadline, and you will have a much better time if you sometimes stop and investigate something interesting that you glimpsed in the corner of your eye, instead of walking past in dogged pursuit of what someone else has told you is the goal.

Why did you choose to join endjin?

I've worked on several projects with endjin over the last 7 years as an Associate, so I already know a lot about the company. endjin works on exciting, challenging projects. It hires great people and provides an environment in which they can continuously improve their skills and broaden their experience. Its founders understand the importance of work/life balance, and have worked hard to ensure that the company supports and promotes this. I have had huge admiration for endjin the whole time I've been involved with it, which is why I'm very excited to become a proper part of it.

What is your role in endjin?

My job title is 'Technical Fellow' and in a nutshell, my role is to lift endjin's technical capabilities. In practice, this means I'll be working with technical staff at all levels, looking for ways in which we can do things better. I'll be doing hands-on work, getting fully involved with projects, but in addition to working on customer projects and product development, I have additional goals around improving the company's processes and technical culture.

Endjin has a well-defined set of career pathways, which accommodates various different ambitions. The 'Technical Fellow' role is part of the technical career track. To quote from endjin's career pathway definitions, this is "for individuals who display and want to continue to focus on the highest levels of technical knowledge, expertise, and excellence." That's a good description of what I strive for. The idea that you shouldn't be required to become a manager to advance in your career isn't a new one—technical career tracks have existed for decades—but sadly they are not the norm.

What do you hope to accomplish in the next 3/6/12 months?

In the first three months or so, aside from doing actual development work helping to move projects forward, one of the first areas I'll be working on is supercharging our DevOps processes. There's plenty that's already done well, as DevOps and DevSecOps form a foundation of how endjin approaches Cloud development, but having worked on a new product as an Associate over the past year, I've seen that we could do a much better job of harvesting, packaging and re-sharing what we've learned on previous projects, and making it easier to apply that to new ones. I see this principally as a matter of communication—having written numerous books and developed and delivered a lot of training courses, I believe that effective communication can multiply the effectiveness of expertise. (This is not to say that endjin has poor internal communications—far from it; it's just that there are ways it could be improved.) Once things are documented, it's much easier to automate and reuse them. So in the next quarter or so, one of my goals is that it should be much easier for anyone starting a new project to set up a DevOps pipeline that takes full advantage of the expertise and intellectual property that we already have.

Over the six-month time frame, I hope to have effected a visible shift in the internal communication culture—if I get the work I just described right, the tangible benefit should be a change in the defaults around what gets shared and how. I'm aiming to establish some momentum in a virtuous cycle of sharing knowledge. And by this time I should have started further initiatives aimed at continuously improving the quality of our work (but right now it's too early to say what those might be—part of my work in the first few months will be identifying the opportunities for improvement).

I'm also aiming to make useful contributions to endjin's core Intellectual Property, which we develop internally to accelerate progress across all projects. And there's also the product I've been working on recently as an Associate, but we're not quite ready to talk about that publicly yet.

Ian Griffiths, Technical Fellow I

Ian has worked in various aspects of computing, including computer networking, embedded real-time systems, broadcast television systems, medical imaging, and all forms of cloud computing. Ian is a Technical Fellow at endjin, and a 17-time Microsoft MVP in Developer Technologies. He is the author of O'Reilly's Programming C# 12.0, and has written Pluralsight courses on WPF fundamentals, advanced WPF topics (WPF v4), and the TPL. He's a maintainer of Reactive Extensions for .NET, Reaqtor, and endjin's 50+ open source projects. Ian has given over 20 talks while at endjin. Technology brings him joy.