Towards the end of last year, Microsoft invited endjin along to a hackathon session they hosted at the IET in London as part of their AI for Good initiative. I've been thinking about the event and the broader work Microsoft is doing here a lot lately, because it gets to the heart of what I love about working in this industry: computers can magnify our power to do good.
If you're not aware of it, Microsoft's goal with AI for Good is to use AI to build technology that is a force for good in the world. Like any technology, AI is neither inherently good nor bad: what matters is what you do with it. AI can amplify your efforts. This initiative is all about helping those who choose to use these powers for good.
The key here is something that I think has always been the most important value of computing, and which is also, not coincidentally, endjin's unofficial mission statement: helping small teams achieve big things. That ethos is a large part of why I came to work for endjin.
Last year endjin worked with OceanMind on systems that monitor global shipping and can automatically detect patterns of activity that are associated with illegal fishing. Overfishing is a huge problem today: if fish stocks are depleted to dangerously low levels, we will be causing problems for generations to come, so it needs to be stopped. The sheer volume of data that needs to be tracked to tackle this problem is daunting, and requires some creative technical thinking to stay on top of it. (If you'd like to know more, we'll be talking more about this at NDC 2020.) But by careful application of technology, it becomes possible for a relatively small company to handle it. By applying a variety of techniques (including, but not limited to, machine learning), technology can sift through an ocean of data, extracting actionable intelligence and enabling specialists to spend their time looking at the data that needs the most attention.
This sort of work is just one of the areas in which Microsoft is working to create positive social transformation. The AI for Good project concentrates on four areas: Earth, Accessibility, Humanitarian Action and Cultural Heritage. While OceanMind was about the first of these, the hackathon we attended fell into the Cultural Heritage category.
We were privileged to meet people from four wonderful UK institutions that Microsoft is working with in that last category: the V&A Museum, the Science Museum, the RAF Museum, and the RSPB. These museums have been an important part of my life, and now I'm enjoying seeing my children learn what they have to offer. And of course the RSPB is an organization whose work benefits the entire country.
Time was limited, this being a one-day hackathon, but the three of us from endjin (me, Jonathan George, and Ed Freeman) got to work with the V&A on a couple of simple proofs of concept. Ed worked on anomaly detection techniques, with the goal of finding possible problems in the catalogue. Meanwhile, Jon and I (well, mostly Jon - I
provided valuable support from the sidelines) built an experiment to see whether the Computer Vision service, part of Microsoft's Cognitive Services offering, could be trained to recognize certain aspects of items in the museum catalogue. On the day we mainly focused on determining which region a particular vase came from. We had a couple of possible applications in mind. One was an assistive tool to generate suggestions for some of the numerous fields that need to be filled in when items are added to the catalogue. Another, partly inspired by the ideas Tristram Hunt (the V&A's director) expresses in https://www.youtube.com/watch?v=EDSyILvfCFc, was a tool to locate items in the catalogue that are related to one another, with a view to helping the people who write the supporting articles that appear alongside the main information about the museum's objects in the online catalogue. The thinking was that if AI could find non-obvious connections, those might act as the genesis for online articles that could help bring hitherto overlooked areas of the V&A's collection to the foreground. This could help find new ways for the museum's existing collection to continue to act as an inspiration to artists.
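To give a flavour of the kind of experiment this was: the sketch below shows roughly how you might call a trained image-classification endpoint and pick out the most probable tag. The endpoint URL, key, project identifiers and tag names here are all placeholders, and the response shape is an assumption based on the style of payload these prediction services return - this is an illustration, not the code we wrote on the day.

```python
import json
import urllib.request

# Placeholder endpoint and key - real values would come from the Azure portal.
PREDICTION_URL = "https://example.cognitiveservices.azure.com/prediction/classify/image"
PREDICTION_KEY = "<prediction-key>"


def classify_image(image_bytes: bytes) -> dict:
    """POST raw image bytes to the prediction endpoint and return the JSON response."""
    request = urllib.request.Request(
        PREDICTION_URL,
        data=image_bytes,
        headers={
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def top_tag(prediction: dict) -> tuple:
    """Pick the most probable tag from a prediction payload."""
    best = max(prediction["predictions"], key=lambda p: p["probability"])
    return best["tagName"], best["probability"]


# A canned payload in the assumed response shape, e.g. region tags for a vase:
sample = {
    "predictions": [
        {"tagName": "China", "probability": 0.91},
        {"tagName": "Japan", "probability": 0.07},
    ]
}
print(top_tag(sample))  # ('China', 0.91)
```

In the assistive-cataloguing scenario, the top tag (or the top few, above some probability threshold) would become suggested values for a field, with a human cataloguer making the final call.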
With only a few hours of coding time, we were only able to get as far as making the case that these ideas seemed viable. But it was notable how straightforward it was to get results with the online services. On the day, details such as how to work with the web API for the V&A's online catalogue took about as much time as getting the Computer Vision API to work for us. It seems that we have reached a point where AI is well and truly out of the lab and in the hands of any developer who can think of useful ways to apply it.
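For anyone curious about what working with the V&A's catalogue API involves, here is a minimal sketch of building a search query and pulling titles out of the results. The endpoint path and the field names (`records`, `_primaryTitle`) are assumptions about the API's JSON shape - check the V&A's current API documentation before relying on them.

```python
import urllib.parse

# Assumed search endpoint for the V&A's public catalogue API.
SEARCH_URL = "https://api.vam.ac.uk/v2/objects/search"


def build_search_url(query: str, page_size: int = 5) -> str:
    """Build a catalogue search URL for the given free-text query."""
    params = urllib.parse.urlencode({"q": query, "page_size": page_size})
    return f"{SEARCH_URL}?{params}"


def titles(response: dict) -> list:
    """Extract object titles from a search response in the assumed shape."""
    return [
        record.get("_primaryTitle", "(untitled)")
        for record in response.get("records", [])
    ]


# A canned response in the assumed shape, so the parsing can be tried offline:
sample = {"records": [{"_primaryTitle": "Vase"}, {"systemNumber": "O12345"}]}
print(build_search_url("vase"))
print(titles(sample))  # ['Vase', '(untitled)']
```

The point of the day was exactly this kind of plumbing: once you can fetch catalogue records as JSON, feeding their images into a vision service is a short step.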
Just as the early personal computing pioneers' vision (not least Microsoft's) of empowering individuals by getting a computer on every desk made the amplifying power of computers available to a mass audience, AI is now within reach of a wider audience than ever before. I look forward to seeing how people use it to magnify their efforts to make the world a better place.