Thoughts From London Tech Week and AI Summit 2023
Welcome to a slightly delayed newsletter this week. Deliberately so, as I was attending London Tech Week and the AI Summit in London and wanted to give some immediate impressions.
London Tech Week
London Tech Week at the QEII Centre in London is a big centrepiece event, not just for the industry but for the UK and its government. Many nations, judging by the stands there, are trying to position themselves as global leaders in technology, the UK especially so in the aftermath of Brexit.
Prime Minister Rishi Sunak was the opening act and, to his credit, spoke well and credibly about the role of tech and AI. Various government ministers followed suit in later sessions throughout the week, as did the Leader of the Opposition.
It’s a shame that the tech didn’t extend to the venue, in particular to some decent air conditioning. Its proximity to Parliament clearly suited the politicians’ fleeting visits, but a conference spread over six cramped floors with such a huge number of visitors isn’t great at the best of times.
The problem for ‘Tech’ is that, like ‘digital’, it’s becoming increasingly meaningless: everything is ‘tech’ and ‘digital’ now, so it’s hard to know what you’re looking at. London Tech Week seems more a place to see and be seen.
I may just have been grumpy, thanks to the free sauna, but I didn’t glean much insight beyond a sense of scale and major change. A better venue would have emphasised that more.
AI Summit
In sharp contrast to the main London Tech Week event, the AI Summit (incorporating the nascent field of quantum computing) at Tobacco Dock was a much richer and more rewarding experience, both as a venue and in terms of content. Although ChatGPT was referenced by virtually every speaker, it was nice to get away from ‘AI = ChatGPT’ to a more nuanced and wider discussion of what AI can and can’t do. My major takeaways were as follows.
Firstly, much of the discussion around AI (prompted by ChatGPT) relates to data already on the Internet but, for larger, well-established firms, it’s leveraging the data within your own company that is potentially more interesting. All the ‘stuff’ that’s filed away, that might be relevant but is difficult to retrieve via keyword search, becomes much more accessible. That new Request for Proposal (RFP) becomes a much less tedious process if you can easily extract every similar previous case and presentation and use AI to produce your first draft response.
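To make that first point concrete, here’s a minimal sketch of the retrieval idea in Python: each filed document gets an embedding vector, and a new RFP is matched against the archive by semantic similarity rather than keywords. The `embed` function is a toy stand-in (a hashed bag of words) for whatever real embedding model a firm would use, and the final drafting step is left as a hypothetical LLM call.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a real embedding model: hash words into a fixed-size
    vector. In practice you'd call an embedding API or an in-house model."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: higher means the two texts are more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_similar(query: str, archive: dict[str, str], top_n: int = 3) -> list[str]:
    """Rank archived documents by semantic similarity to the new RFP."""
    q = embed(query)
    return sorted(archive, key=lambda doc: cosine(q, embed(archive[doc])),
                  reverse=True)[:top_n]

archive = {
    "bid_2021_rail": "signalling upgrade proposal for a regional rail operator",
    "bid_2022_hospital": "facilities management tender for an NHS trust",
    "bid_2020_rail": "rolling stock maintenance bid for a rail franchise",
}
similar = find_similar("new RFP: rail signalling and maintenance", archive)
print(similar)  # the rail bids should rank above the hospital tender
# first_draft = draft_with_llm(rfp_text, context=similar)  # hypothetical LLM call
```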
The second is that AI can produce a step change in optimising processes and techniques based on past experience: every construction project you’ve done in the past becomes a far richer source of learning. The ‘lessons learnt’ exercise, usually conducted at the end of a project and just as usually swiftly forgotten, becomes far more valuable if incorporated into each project step, turning it into a resource for the next team that has to do something similar.
Related to this is the ability to complete research and trials in a much shorter period, because AI cuts the time needed for data analysis and lets the solution adapt as user behaviours change, e.g. re-sequencing traffic lights throughout the day.
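As a toy illustration of that feedback loop (my sketch, not any system shown at the Summit), the function below re-splits a junction’s green time in proportion to recently observed traffic counts, so the sequencing shifts through the day as behaviour changes:

```python
def retime_signals(counts: dict[str, int],
                   cycle_s: int = 90,
                   min_green_s: int = 10) -> dict[str, int]:
    """Split one signal cycle's green time across approaches in proportion
    to recently observed vehicle counts, with a floor so no approach starves.
    Rounding means the total can drift a second or two; fine for a toy."""
    total = sum(counts.values()) or 1
    spare = cycle_s - min_green_s * len(counts)  # demand-allocated green time
    return {approach: min_green_s + round(spare * n / total)
            for approach, n in counts.items()}

# Morning peak: heavy northbound flow gets most of the cycle...
print(retime_signals({"north": 120, "south": 40, "east": 20, "west": 20}))
# ...and by the evening peak the balance shifts, with no re-engineering.
print(retime_signals({"north": 30, "south": 110, "east": 25, "west": 25}))
```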
While none of this was necessarily ‘new’ as a concept, the various use cases throughout the day really got across the potential scale of deployment across every industry. Take climate change: not only does AI offer the possibility of new solutions in the longer term, it could also significantly improve our current use of fossil fuels by making more intelligent use of our transport systems. Many London bus routes and rail timetables have hardly changed in decades, and many cars sit in traffic jams pumping out CO2, which is bad for both the local environment and the climate.
Developing new ideas faster and radically optimising the things we do now, across all industries simultaneously, really is something we’ve never seen before. There’s no doubt about the potential benefits, but what of the risks?
AI Regulation
Having stuck my neck out on AI regulation last week, I was keen to see how practitioners saw it. My overall impression was that they acknowledge the concerns but feel they’ve perhaps been overplayed.
A number of the speakers argued, very convincingly, that AI is just another disruptive technology and that they had developed internal frameworks to ensure responsibility for their models and principles for their use. Paul Lincoln, 2nd Permanent Secretary at the Ministry of Defence, made the point that the genie is out of the bottle and that the pace of development will be set by the bad guys (my paraphrasing, not his), which is understandable from a defence perspective.
Glen Robinson, National Technology Officer at Microsoft, gave a really interesting talk in which he traced a lineage from cloud computing through ‘low code / no code’ to the era of AI and AI co-piloting, describing AI as the ‘democratisation of technology through natural language’. From a glass-half-full pov, that means anyone can use AI to do something better; from a glass-half-empty pov, that means anyone can use AI to do something better. In the same way, anyone can have a knife; whether it’s used for good or bad depends entirely on the person using it.
The UK is to host the first global AI regulation summit in the autumn, so we’ll see where this goes. But I feel that many companies are, understandably, focussed on what they’re doing rather than on the cumulative impact of unleashing this technology across all aspects of life simultaneously, and that we ought to be thinking about ‘soft harm’ as well as physical harm.
One of the most considered views on this subject came from technologist Kevlin Henney, who posted the following reply to last week’s article on LinkedIn:
Rather than attempt to define AI/ML/whatever and to regulate the software itself, we should concentrate on the inputs and outputs. We should build on and reinforce existing legislation, regulation and other forms of restraint, constraint and moderation — this is often easier than introducing something new that is potentially more contestable, but also more likely to be slower in coming.
When it comes to outputs, it is the question of application and context. In the contexts where 'harm' is well defined, such as medical and automated industrial systems, we already have a foot in the door. In other words, regulation doesn't have to focus on software architecture, but on outcomes in particular contexts.
It is easier to make a stronger case for talking about other forms of personal or social harm in terms of other laws, such as those relating to equality, etc., and expand those. Again, bringing software into the fold, but without referencing its architecture (thus avoiding the AI-definition trap).
When it comes to inputs, we should recognise that most AI systems are based on big data, and it is the nature of that data that we can regulate and apply (or remind ourselves to reapply) legislative effort.
As an individual, I have data rights, and so the question is not one of AI but of whether a company can freeload off my personal data to its own benefit without my consent, and whether I can withdraw such consent.
If I am a creator (music, art, code, etc.), I have rights to my work. Is it fair use for someone to create a software system for their own commercial benefit where my work was involved without acknowledgement or recompense? In other words, the focus on intellectual property rather than AI.
Interestingly, one of the speakers said something similar about looking at the outputs and running the equivalent of medical clinical trials to see the effects. We’ll see where this one goes but, for the time being, I think we need a regulatory framework around a technology that will be applied simultaneously across all aspects of society; the problems of social media should be a warning light.
Diversity in Tech
Despite the very best efforts of the organisers of both events in terms of speakers and the prominence given to diversity, actual diversity in Tech seems somewhat more elusive, as Kevin Withane of Diversity X pointed out in his talk at the AI Summit. Diversity X’s raison d’être centres on the following pov.
At Diversity X, we do not refer to founders that are women, people of colour, LGBTQIA+, disabled, neuro-diverse, or older, as "underrepresented". We do not believe that founders from these demographics are underrepresented, but instead, we believe they are overlooked and underestimated founders. That is why we refer to them as Underestimated Founders™. The underrepresentation comes in the equitable allocation of capital.
Diversity X website
Kevin’s talk went through a number of examples illustrating this, e.g. less than 2p in every £1 of UK equity funding went to all-female founder businesses in 2022.
Diversity isn’t a box-ticking exercise. We have to sort this out. If AI represents the ‘democratisation of technology…’, then the allocation of capital and jobs in the industry needs to reflect everyone in that democracy, not just particular sections of society.
Plea to Exhibitors
For God’s sake, tell us on your display what the hell it is your company does and who you want to talk to! I walked past so many stalls because they had a meaningless guff tagline like ‘Providing accelerated solutions’.
One stall had an array of mugs and socks (yes socks!) but no ‘customers’ because the stand didn’t give anyone a clue what they were there for. Unless you’ve fallen on hard times or you’re Dobby the house elf from Harry Potter, why would you want a pair of branded socks?
If you’re not a big brand name and/or it’s not immediately obvious why you’re there, then tell us what you do, what problem you solve or who you want to talk to. Hats off to the Government of Canada for ‘Invest in Canada’, a tagline that isn’t going to win any awards but at least told me what they wanted to talk about and made me care enough to go and have a look.
Final Thoughts
These two conferences, along with the Genomics conference I attended earlier in the year, really cement the thought that we are on the cusp of an inflection point: from the steady progression that ran from the advent of personal computers to the present, into a world of exponential change in both the technology and the way we interact with it.
I’m hugely optimistic about the potential of Tech, and AI in particular, but less so about who is going to benefit. If we are able to raise living standards for the population as a whole and tackle fundamental problems such as poverty, disease, climate change and education, then I’m all in. If, however, the benefits of Tech accrue to a cadre of founders, venture capitalists and the world’s wealthiest, then I think we are heading for a huge problem of our own creation.
For me, the threat is less that AI destroys humanity, Terminator-Skynet style, and more that we destroy ourselves fighting over the distribution of those benefits, just as we have throughout human history, each time with more advanced weaponry (and AI could produce some fantastic weaponry). As Paul Lincoln said in his talk, AI requires some ‘serious human brain power’, and that applies not just to developing the technology itself but to how we best deploy it.
Until next time
Pete