For me, there are some significant differences, including:
Mega-capitalization, multi-national, market-dominating companies are leading the sector
The major LLM/genAI developers are not following a rapid angel investment => single round of VC investment => go IPO with beta product corporate strategy
Retail investors and buy-side insitutional investors do not have much access to LLM/genAI investments
LLMs/genAIs are being highly integrated into mainstream products, such as mobile phones, web browsers, and traditional search engine results that are part of most people’s daily routines
In addition to individual users, LLMs/genAIs are rapidly being adopted by businesses, large and small.
It reminds me more of the marketing preceding Year 2000. I don’t mean the entirely legitimate and underappreciated work put into Y2K remediation.
For several years leading up to 1/1/2000, it seemed like every company in the IT space was claiming that their product was just what you needed to get Y2K ready. And there was a definite effort to cram some kind of Y2K selling point into every product, regardless of whether it was a good fit or just marketing drivel.
After the dot-com crash, the Internet still exists. The strongest companies survived and became giants (Amazon, Google, eBay, etc.). The bust just cleared out the unsustainable ones.
I think AI will be similar: once the hype settles, the practical, valuable uses of AI will remain, and the strongest players will keep shaping the future.
Yes, we ended up with today’s Internet. Sure, there were lots of high profile failures along the way, but that’s always going to be the case when venture capital is betting large sums on future rewards.
On September 4, Scientific American published an article (which I’d link to – except that it’s behind a paywall), which is worth reading:
The New Frontier of AI Hacking—Could Online Images Hijack Your Computer?
Artificial-intelligence agents—touted as AI’s next wave—could be vulnerable to malicious code hidden in innocent-looking images on your computer screen…
An example shows how images altered (in an experimental setting) to include embedded prompts could be sufficient to “trigger the agent on someone’s computer to act maliciously”. And how forwarding such an image can infect the computer of the person you forward to.
When the dot-com bubble burst a large wave of dot-coms folded, and only a few survived to become giants. Stock prices of telecom companies dropped to a few percent of their peaks and some of the old giants failed, among them Northern Telecom, Lucent and MCI Worldcom. The dynamic of the tech bubble was that a huge wave of startups were launched, most folded or were bought, and a few percent survived to reap the profits. The ecology of technology business now is that many seeds are sown but few grow into thriving businesses – and some technologies never get off the ground. Remember 3D TV?
The link is here. If you have Apple News+, you can find it here.
Even if something is behind a paywall, please give out the link. If you subscribe to the publication, you may be able to gift a link to the article. The paywall may be semi-permeable. I have found that Reader View in Safari often reveals the content of a paywalled article. News+ subscribers may be able to access the article in Apple News via the Share Sheet (or Share icon on a Mac). For example, I don’t subscribe to the Wall Street Journal, but I do subscribe to Apple News+. So, if someone provides a WSJ link from the share icon, I can usually open it in News+.
So always provide a link; if you know it’s behind a paywall, just warn people.
Thanks for the link. One of the first things that jumped out at me is that the author seems to seriously embrace security-by-obscurity:
The author clearly has no real-world experience with cyber-security.
Yes, open source makes it easier for hackers to find vulnerabilities, but it also makes it much easier and more likely that someone else will fix that vulnerability.
Closed-source doesn’t mean a hacker can’t find security holes. It only means that when the holes are found, nobody other than the owner will be able to fix it. Which usually means that fixes often don’t happen until a high profile exploit embarrasses the owner into fixing his bugs.
The article also seems to thing that AI agents are going to behave like humans, clicking on icons:
This sounds like a completely brain-dead way to implement an agent.
If I wanted software to do things on my behalf, I would expect it to run in an environment where it directly accesses the various APIs it needs to get its work done. Simulating a human by reading screens and clicking on icons is a massive waste of computer resources.
Finally, the author assumes that an agent is going to blindly execute whatever text it finds hidden as if it was a command. Who would ever think that’s a useful feature?
I get the point of the article, but it reads like click-bait scare tactics to me.