float4 a day ago

> Bezos nailed it on this topic: “[...] [I]n our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection. It's impossible to imagine a future 10 years from now where a customer comes up and says, 'Jeff I love Amazon; I just wish the prices were a little higher,' [or] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible. And so the effort we put into those things [...] will still be paying off dividends for our customers 10 years from now. [...]”

> You should consider what won’t change, and the following is a (non-exhaustive) list of things that I think won’t change: I believe AI is and will continue to gain intelligence

Okay, but that way you can frame every ongoing change as a constant. "Change X will continue, and because it's already ongoing and will simply continue, I consider it a constant and therefore add it to my list of 'things that won't change'". But that's clearly not what Bezos meant.

  • Etheryte a day ago

    I think it's pretty easy to see the statement is an oversimplification to the point where it loses pretty much loses all value. Bezos says customers want vast selection, but most would agree that the reason Amazon is garbage these days is because it's flooded with cheap crap. The selection is vast, but the pile of dung is so large that it's practically impossible to find a good product hidden underneath the rest of it.

  • InkCanon a day ago

    I find Bezos' statement to be a bit oversimplified. For example Temu (and virtually every other Chinese E commerce) wipes the floor with price and selection. Costco is cheaper than Walmart. Yet Amazon is vastly larger than both.

    • dbspin a day ago

      Agreed. Without negative caveats, such positive statements are meaningless.

      Customers want cheap goods. Caveat: They don't want (to know that) those goods are produced by slave labour.

      Customers want a vast selection. Caveat: This should not include fake, shoddy or misleading listings

      Customers want rapid delivery. Caveat: And they want it cheaply, or ideally for free, without breaking their stuff, at times they are home or in a manner they can receive the goods while away from home.

      Etc.

  • torginus a day ago

    Sorry this is a bit off topic (but relevant to your post).

    I'm not American, but what do people like in Amazon, as in the retailer?

    I have experience with the German Amazon, and often they're not the cheapest, they often don't have stock of the most popular items (as in the stuff you'd actually want, like iPhones or NVIDIA GPUs), and same day delivery, while nice, is something I can usually live without (and I'm willing to trade it in exchange for lower prices).

    They seem to have an endless back catalog of cheap and cheerful mystery products of dubious quality, but I hardly consider that a decisive competitive edge.

    • spacebanana7 a day ago

      Amazon is excellent at selling physical books. I can order pretty much any vaguely popular book and have it delivered the next day at a price rarely higher than anywhere else.

      That’s Amazon’s core business philosophically, everything else is an add on or side project that happened to be profitable.

      I think that just like the original sin of web development is trying to run apps in a document browser, the original sin of Amazon is trying to sell everything in a bookstore.

    • whiplash451 21 hours ago

      Lowest click-to-package-at-my-door number (especially for books).

    • dboreham a day ago

      US Amazon isn't like that, but iphones and short supply GPUs aren't widely available anyway. Apple controls where you can buy an iPhone and NVidia controls who gets GPUs.

    • sofixa a day ago

      > I'm not American, but what do people like in Amazon, as in the retailer?

      I'm not American either, but I use Amazon.fr occasionally. It has going for it:

      * it's a trustworthy site. If I order something, I'm 100% sure I'll get it or get my money back. If I'm looking for something rather niche like an ESP32-S3 microcontroller, it beats buying on it vs a random site I've never heard of before that it will have longer delivery times and might be a scam or might have nonexistant support

      * it has a large catalogue. I can buy coffee, kimchi, small electronics (PWM servo motors), larger electronics (toaster), power bank, USB C charger, mouse, outdoor furniture. It's easy to buy all sorts of stuff off it without hunting specialised physical stores or a ton of different websites. (of course for some things I know and already trust various websites or stores, so I buy off them; but for more generic or niche things, Amazon is pretty good)

      * support, returns, delivery are all very good and there is barely anyone that is even close.

  • whiplash451 a day ago

    100%. This sentence in particular seems at odd with looking at constants:

    > “Better product”: We need to define "better" clearly, but if you're basing this off your R&D efforts, I would very much fear the competition coming my way. If someone can use enough compute to copy you and use AGI to make a product better than what you currently have, is it still "better"?

    IMO, better products is actually a constant that is anti-fragile to AI. Better products remain the best way to gain market shares for the foreseeable future (alongside solid marketing, ops and finance).

  • sillyfluke a day ago

    Yes, definately. I find the lack of discussion about time frames as totally unserious. Their starting assumptions could be all valid if clairvoyantly made in the 90s and they'd still be utterly useless in helping startups make decisions for that decade. However, if they knew there would be significant breakthroughs in the early 2020s, well that'd be something else. Though you know, they'd have to find some random ways to stay alive until then.

    Bezos is making assumptions about human behavior in that quote, and those assumptions seem instantly obvious to any human who is asked, regardless of their experience or expertise with any business whatsoever. There is no instant validity possible with the AI assumption.

  • trash_cat a day ago

    > You should consider what won’t change, and the following is a (non-exhaustive) list of things that I think won’t change: I believe AI is and will continue to gain intelligence

    I think this is a miss-representation of what he meant. Given that AI will be capable and prevalent (cheap intelligence) what are the factors that remain constant? He goes a lot into demand for physical things, like resources and/or supply chain, which is true. If anyone can relatively easily create a digital service then those with capital and physical resources will have bigger moat.

    I personally think what will happen with the demand for digital services with intelligence being cheap.

  • sgt101 a day ago

    (total side track) There are other things that some customers want though:

    - for the recommendations to offer me things I want or need, not things I just bought

    - to be able to evaluate the quality of items rather than just the price of items

    - for Amazon to extend it's brand around the items that I buy. "Amazon Recommends" is just so weak and offers no assurance or opportunity for loyalty. It's more or less meaningless and I suspect it's something that suppliers buy.

    As every in business it's very difficult. I know that Amazon is humongous and knows it's business inside out. I am sure that Amazon insiders just feel tired reading other people's ideas about what would make things better, but on the other hand I do think that the narratives of business inevitability (and AI inevitability) are just false. Yes they have triumphed until recently, but what's happening in China really does undermine the idea that the future will be everyone just grifting to everyone else for a dime while the big corps enshitify anything that emerges from the primordial ooze.

    Not that I think that what's happening in China is good.

Terr_ a day ago

> Even if we’re being super conservative, the current capabilities of AI - like Claude 3.5, GPT-o1 - are already powerful enough to disrupt nearly every industry we know.

Skeptic here. The disruption might not be that large if the most ambitious applications also turn out to be fundamentally un-secure-able against malicious attacks, since "prompt injection" is not so much an exception as the fundamental operating principle of the text-fragment dream-machine.

  • pixelsort a day ago

    It isn't fundamental. As the models begin to leverage test time compute more effectively, prompt injection becomes more difficult. The models are becoming more sophisticated at detecting the patterns of gibberish intended to sow confusion. In time, bare prompt injection probably stops being a thing. Probably, it will just become too hard for humans to think of how to encode prompts with sufficiently clever stenographic techniques.

    • alexvitkov a day ago

      It doesn't matter how many layers of Python you use to obfuscate what a LLM actually is, as long as the prompt and the data you're operating on are part of the same token stream, prompt injection will exist in one form or another.

      • pixelsort a day ago

        I imagine that with native tokens for planning and reflection empowering the models I'm referring to, it is something like a search space where we've enabled new reasoning capabilities by allowing multiple progressions of gradient descent that leverage partial success in ways that weren't previously possible. Lipstick or not, this is a new pig.

      • FrustratedMonky a day ago

        "Prompt Injection".

        1. I wonder if we need to start discussing "Prompt Injection" security about humans. Maybe Fox and Far Right marketing is a form of human Prompt Injection hacks.

        2. Maybe a better model for how future "Prompt Injection" will work. Hacking an AI will be more about 'convincing it' kind of like how humans have to be 'convinced' like with propoganda.

        3. SnowCrash had the human hacking virus based on language patterns from ancient Sumerian. Humans and Machines can both be hacked by language. Maybe more researching into hacking AI will give some insight into how to hack humans.

        • Terr_ 18 hours ago

          To use a narrow interpretation of "prompt injection", it comes from how all data is one undifferentiated stream. The LLM [0] isn't designed to detect self/other, let alone higher-level constructs like truth/untruth, consistent/contradictory, a theory-of mind for other entities, or whether you trust the motives of those entities.

          So I'd say the human equivalent of LLM prompt injection is whispering in the ear of a dreaming person to try to influence what they dream about.

          That said, I take some solace in the idea that humans have been trying to hack other humans for thousands of years, so it's not as novel a problem as it first appears.

          [0] Importantly, this is not be confused with characters that human readers may perceive inside LLM output, where we can read all sort of qualities including ones we know the author-LLM does not possess.

    • sgt101 a day ago

      Nearly complete security isn't security. If the potential is there people will find it, other models will find it.

      Everythings fine until one day $200m disappears from your balance sheet and no one can explain why!

      • pixelsort a day ago

        Working prompt injections for frontier models are devised by applying brilliant pattern constructions. If models ever become useful for writing them, that would represent a massive intelligence leap and a major concern.

        As things stand, with working injections becoming harder for humans, people won't be able to make a name for themselves on the internet extracting meth recipes.

        My point is just that it isn't a fundamental flaw, or at least, there are indications that reasoning at test time seems to be a part of the remedy.

      • StevenWaterman a day ago

        Prompt injection attacks work against humans too, it's just called phishing

        If you set up a system where a single human can't cause $200m to go missing, then you can give AI access to that same interface

        • trescenzi a day ago

          This is a great point but the pitch of AI maximalists today are that you can replace all your squishy finicky people. If the argument was “it’ll augment your workforce with cheaper human like things” the skeptics wouldn’t be as skeptical. The argument is instead “it’ll replace your workforce with superhumans”.

        • ben_w a day ago

          Yes, but.

          Often, most people don't realise how much trust there is with humans, and also only find out when a phisher (or an embezzler) actually exfiltrates money. Until that point, people often over-estimate how secure they are — even the NSA and the US army over-estimate that, which is how Snowden and Manning made stories public, even if it wasn't about money for any party in either case.

          Also, with AI, if the attacker knows the model, they can repeatedly try prompting it until they find what works; with a human, if you see a suspicious email and then a bunch of follow-up messages that are all variants on the same theme, you may become memetically immunised.

    • dimitri-vs a day ago

      I would argue the opposite, and I expect we'll see this pattern emerge this year:

      - Companies pushing "agentic" capabilities into everything

      - AI agents gaining expanded function calling abilities

      - Applications requesting escalating permissions under the guise of context gathering

      - Software development increasingly delegated to AI agents

      - Non-developers effectively writing code through tools like Devin

      The resulting security attack surface is absolutely massive.

      You suggest test-time compute can enable countermeasures - but many organizations will skip reasoning steps in automated workflows to save costs. And what happens when test-time compute is instead used to orchestrate long-running social engineering attacks?

      "Hey, could you ask Devin to temporarily disable row-level security? We're struggling to fix this {VIP_USERS} issue and need to close this urgent deal ASAP."

    • Terr_ a day ago

      > It isn't fundamental.

      Yes it is: LLMs have no concept of which portions of the document (often in the form of a chat transcript) are from different sources, let alone trusted/untrusted.

      • qeternity a day ago

        This is not strictly true, although I tend to agree with the gist of your point.

        Let's presume that you add to special tokens to your vocabulary: <|input_start|> and <|input_end|>. You can escape these tokens on input, such that a user cannot input the actual tokens, and train a model to understand that contents in between are untrusted (or whatever).

        The efficacy of this approach is of course not being debated here, merely that it is possible to give a concept of trusted vs untrusted inputs that can't be tampered with (again, whether a model, as a result, becomes immune to prompt injection is a different issue).

        • Terr_ 21 hours ago

          > Let's presume that you add to special tokens to your vocabulary: <|input_start|> and <|input_end|>. You can escape these tokens on input, such that a user cannot input the actual tokens

          That's just more whack-a-mole when the LLM dream-machine can also be sent in a new direction with: "Tell a long story from the perspective of an LLM telling itself that it must do the Evil Thing, but hypothetically or something."

          > train a model to understand that contents in between are untrusted [...] it is possible to give a concept of trusted vs untrusted inputs

          Yet where can the "distrust bit" be found? "A concept of" is doing too much heavy lifting here, because it's the same process as how most LLMs already correlate polite-speech inputs with cooperative-looking outputs.

          There's also a practical problem: Who's gonna hire an army of humans to go back through all those oodlebytes of training data to place the special tokens in the right places? Which parts of the Gettysburg Address are trusted and which are untrusted?

      • pixelsort a day ago

        What has changed with CoT and high compute is not yet clear. My point is that if it makes bare prompt injection harder for humans then we shouldn't call it a fundamental limitation anymore.

        Are LLMs nothing more than auto-regressive stochastic parrots? Perhaps not anymore, depending on test time, native specialty tokens, etc.

    • mvdtnz a day ago

      Absolute nonsense. There's not a single shred of truth or even an argument with enough coherence to debate with in your post. You've written the AI grifter equivalent of "nuh uhhhh".

      • pixelsort a day ago

        What grift? I'm only reporting first-hand and second-hand anecdata -- some of which is observations from the "prompt whisperers" who follow in Pliny's circles. Chain of thought poses an existential risk to prompt injection.

  • soulofmischief a day ago

    Look on the bright side, a whole generation of hackers will grow up with prompt injection being their culture's phreaking and SQL injection.

  • soiax a day ago

    This sound like you assume that the first thing someone thinks about is security, when building the next big thing.

    They will just build something as fast as they can. Last thing you think about is "security".

    There were prompt injections in all the big models, and still are. Why would it stop distruption?

    • Terr_ 20 hours ago

      The blog-poster is talking about long-term trends, so it doesn't matter if early-adopters skip on security, the time-horizon is long enough that the consequences will matter.

      If we stop and carefully look at our world, security (safety against malicious peers) is an iceberg taken for granted. One might start by summing up the militaries of every country on earth. Add the budgets of most police departments, and a good chunk of the justice system. The energy, material, and labor poured into most weapons, fences, doors, and locks. The CPU cycles used in all encryption, and most of the hashing.

      P.S.: "Investors, friends, I am pleased to announce that our bold and powerful new business-model which will completely disrupt the entire retail sector, worldwide, and change society forever. Behold! TTLMD: Take The Thing and Leave the Money in the Drawer! Existing industry dinosaurs will be unable to compete with our ultra-low-cost alternative which needs barely any staff."

      • soiax 19 hours ago

        You mentioned prompt injection, now when you talk about larger time horizons, that sounds like a AI alignment issue.

        I'm sure there will be actors who don't care at all about "security", saying the positive outcomes outweight the negatives.

        • Terr_ 19 hours ago

          No, I'm still talking about prompt injection (and other more-normal reliability issues), because I do not believe LLMs are some inevitable stepping stone to an actual AI, one that has "alignment" to principles or goals beyond "what additional token completes this document the best." (Robot characters humans perceive when reading the document are not the author of the document.)

          For any technology or product, there are issues which can be ignored or downplayed in the name of profit today, but they tend to pop up eventually. That's why it's very hard to buy leaded gasoline anymore, and the joke about how the "S" stands for "Security" in the term "IoT".

emanuer a day ago

Here is the perspective of a serial founder, exploring fields which I might be able to disrupt:

- The regulatory moat is immediately intimidating.

- The data moat, often, is quite surmountable as long as LLMs can generate high-quality synthetic data (e.g., user preferences). On this I disagree with the author, to some extend.

- The "distribution moat" is another significant barrier. Even if I have a superior product, if the marketing and sales demands are so high that neither I nor an army of bots can manage it alone, the business becomes nonviable (e.g., enterprise sales).

- "Switching costs" form the next moat. The higher these costs, the greater the value per dollar I must offer over the incumbents (e.g., software for dentists).

- Another key barrier is the “business rules” moat. Achieving 80% of the required features may be easy, but as customers demand 90% or 95%, the complexity and cost of reverse engineering grow exponentially. The more mature the market, the higher these demands (e.g., Jira).

With the power of LLMs at my disposal, I have reaffirmed two core beliefs:

1. I must focus on a niche small enough, so that I am the only provider. (e.g., accounting software for gym owners in the north of France)

2. I must offer a value proposition different from that of the incumbents, where competing with me, would harm their business. (e.g., image editing app where you pay per hour used)

So my search continues…

  • mritchie712 a day ago

    You're likely tossing out a random example on #1, but if that were a real idea, you'd need a good answer for: why can't gym owners in the north of France just use quickbooks or xero.

    • emanuer a day ago

      You are correct, it was just a random example.

      And I share your observation, if there is no clear answer to your question, the idea must be disregarded.

  • whiplash451 21 hours ago

    I like your train of thoughts. I think you're missing the network effect. It is often an overplayed classic, but I do think that it matters in an AI world.

airstrike a day ago

> Bezos nailed it on this topic: “[...] [I]n our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection. It's impossible to imagine a future 10 years from now where a customer comes up and says, 'Jeff I love Amazon; I just wish the prices were a little higher,' [or] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible. And so the effort we put into those things [...] will still be paying off dividends for our customers 10 years from now. [...]”

This quote from TFA makes it sound like Bezos was the first to realize customers want low prices, but that's obviously false. What made Amazon special wasn't that realization. It was, among other things, to offer a better _shopping_ experience than the alternatives by making products easier to find, one-click purchases, customer reviews, detailed organized descriptions, FAQs, an increasingly growing selection... and then offer a better _shipping_ experience with later 2-day shipping for a flat annual fee, now often 1-day or same-day in some geographies, no-fuss returns and so on.

No one else has figured out logistics in the same way that Amazon has. Obviously scale helps, but Walmart had all the scale it could want and it still didn't figure out how to make it work. Shopify has also only faltered and fumbled so far.

Amazon created value because it organized the extremely complex activities of shopping and shipping in a way that makes them the obvious choice 99/100 times. That requires talent, software and hard work. It delivered so god damn much of those three things that it created AWS as a byproduct.

That's the Amazon DNA. That's where they shine and where they outcompete everyone else, including Walmart and other traditional retail names as well as FedEx, UPS and all other traditional shipping players.

When Amazon strays from that core DNA, they struggle too. Its successes with things like iRobot, the Fire line, Luna, Alexa, Whole Foods for the most part are either muted, late, or missing entirely.

ben_w a day ago

> It's impossible to imagine a future 10 years from now where a customer comes up and says […] 'I love Amazon; I just wish you'd deliver a little more slowly.' Impossible.

Bezos said impossible, but he was wrong about this. Because they sometimes spontaneously change delivery dates to be sooner, this can mean you have to be available on every day until a product arrives to avoid a "sorry we missed you" letter followed by needing to go to wherever the collection office is.

Reliable delivery can beat fast. And for those of us not able to work from home, scheduled delivery for when we're in, also beats fast. And if we have several different things all in the same order, where we need all the parts to make use of any of them, simultaneous delivery is marginally more convenient than each item being shipped as soon as it's available.

ankit219 a day ago

Reading this, thinking in terms of moats is useful, but in terms of AI, we are not there yet. There is a promise of exceptional improvement to everything, so much that many companies which takes ages to change a software are moving at a significantly faster pace.

One counter-intuitive thing here I believe is that thinking about moats is limiting. If you can deliver a solution today, which may not hold for a longer period (you keep innovating or launching newer products), is a preferable place to be than working out what could stand the test of time. Real answer is we don't know. A very real example is agents - thinking systems which can plan, reason, and take action. Within three months, an o1 equivalent would be able to do all that implicitly without a developer having to write complex pipelines, and companies woudl have to start over. AI democratizes human skill. That I think is a bigger mental model shift than many realize.

Over2Chars a day ago

I found the part of this I read to be a less than convincing market analysis of the barriers to entry for business.

Here's an AI on the same topic

"briefly, what are the top 5 current barriers to entry for AI companies"

Certainly! Here are five of the most significant barriers currently affecting the startup phase of AI companies:

1. *Data Quality and Availability*: Access to high-quality data is crucial for training effective machine learning models. However, obtaining large amounts of labeled data can be costly and challenging.

2. *High Initial Development Costs*: Building robust AI solutions often requires substantial investment in research, development, and infrastructure. This includes hiring skilled professionals with expertise in AI, as well as investing in hardware and software tools.

3. *Regulatory Compliance*: Many industries have strict regulations that businesses must comply with, especially when dealing with sensitive data or making predictions that could impact people’s lives (e.g., healthcare, finance). Adhering to these laws can be complex and costly.

4. *Technological Complexity*: Advanced AI technologies often require a high level of technical expertise. Companies need specialists in algorithms, software development, and domain-specific knowledge to design and deploy effective solutions.

5. *Scalability and Maintenance Costs*: Once an AI system is developed, there are ongoing costs associated with maintaining the model (e.g., updating algorithms as new data becomes available) and ensuring that it continues to perform well as usage increases.

These barriers can vary based on specific sectors and market dynamics but generally represent significant hurdles for AI startups.

  • mvdtnz 17 hours ago

    Don't post AI slop in HN comments.

    • Over2Chars 13 hours ago

      Yes, my point exactly. This AI slop is better than the article.

      • Sammi 2 hours ago

        The article starts by stating why they wrote the article entirely without LLMs...

KaiserPro a day ago

We have not really reached the peak of the AI bubble yet, so its a bit hard to concretely talk about moats.

LLMs aren't the golden bullet the article hints at. Sure they are improving, but the cost is not falling. It costs a huge amount to create foundation models, and there will be a point where either we have a breakthrough (ie we move from sequence generation to concept synthesis) or the money runs out.

But regardless the rule of thumb still holds:

If your business idea is simple to do, then you need another plank to make your moat. That could be network effect, access to capital or both.

Patents are there to inhibit capital, because it costs money to challenge a patent (as well as defend)

If your business idea is not simple to implement, then you might have the benefit of time.

AI doesn't really change any of that, it just amplifies the effect. ie, making an amazon clone is simple now, because the tech/infra exists. Amazon had to make that infrastructure first, which was hard.

  • silveraxe93 a day ago

    But the cost is _definitely_ falling. For a recent example, see DeepSeek V3[1]. It's a model that's competitive with GPT-4, Claude Sonnet. But cost ~$6 Million to train.

    This is ridiculously cheaper than what we had before. Inference is basically getting an 10x cheaper per year!

    We're spending more because bigger models are worth the investment. But the "price per unit of [intelligence/quality]" is getting lower and _fast_.

    Saying that models are getting more expensive is confusing the absolute value spent with the value for money.

    - [1] https://github.com/deepseek-ai/DeepSeek-V3/tree/main

    • ADeerAppeared a day ago

      > Inference is basically getting an 10x cheaper per year!

      You're gonna need some good citations for that.

      There's a big difference between companies saying "The inference costs on our service are down" and the inference costs on the model are down. The former is oft cheated by simplyifying and dumbing down the models used in the service after the initial hype and benchmarks.

      > But the "price per unit of [intelligence/quality]" is getting lower and _fast_.

      Absolutely not a general trend across models. At best, older models are getting cheaper to run. Newer models are not cheaper "per unit of intelligence". OpenAI's fany new reasoning models are orders of magnitude more expensive to run whilst being ~linear improvements in real world capabilities.

      • silveraxe93 a day ago

        See situational-awareness[1], see the "algorithmic efficiencies" section. He shows many examples of how models are getting cheaper. With many citations.

        Costs are not just down on a specific service. Even though I don't see the problem in that, as long as you get the promised level of performance, without being subsidised. See the deepseek model I linked above. It's an open model and you can run it yourself.

        > At best, older models are getting cheaper to run.

        What's your definition of old here? If you compare the literal bleeding edge model (o3) to 2 years ago best model (GPT-4)? Not only is this a ridiculously misleading comparison, it's not even valid!

        o3 is a reasoning model. It can spend money at test time to improve results. Previous models don't even have this capability. You can't look at one example of where they just threw a lot of money and say this is the cost. The cost is unbounded! If they want, they can just not let the model think for ages and have basically "0-thinking" outputs. This is what you use to compare models.

        If you compare _todays_ cost for training and inference of a model as good as GPT-4 when it was released, this cost has massively gone down on both counts.

        [1] - https://situational-awareness.ai/from-gpt-4-to-agi/#The_tren...

    • KaiserPro a day ago

      I'm not convinced about that 10 cheaper a year.

      Larger models need more memory. I'm willing to bet that most of the tier 1 providers rely on multi-GPU models to serve traffic.

      None of that is cheap, 8x GPU nodes that serve less than 20 queries a second are exceedingly expensive to run.

      • silveraxe93 a day ago

        Larger models are more expensive to run (ceteris paribus). But we're seeing we can squeeze more performance from smaller models.

        You need to compare like-for-like. You can't say that the cost of building a 5-story apartment is increasing by pointing at the burj khalifa.

        • menaerus a day ago

          Now remind us what HW did we need to run local inference of llama2-69B (July, 2023)? And then contrast it to the HW we need to run llama3.1-70B (July, 2024)? In particular, which optimizations and in what way did they dramatically cut down the cost of the inference?

          I seriously don't get this argument and I see it being repeated all over and over again. Although model possibilities are increasing, no doubt in that, HW costs for inference remained the same and they're mostly driven by the amount of (V)RAM you need.

    • mvdtnz a day ago

      > We're spending more because bigger models are worth the investment

      Are they? Where's the value? What are they being used for actually out there in the real world? Not the shitty apps that simonw bleats about day in day out, not the lame website bots that repeat your FAQ back at me - actual real valuable (to the tune of the billions being invested in them) use cases?

      • silveraxe93 a day ago

        ChatGPT is one of the fastest growing apps ever. Saying that's there's no products is willful blindness by this point.

        This is hackernews. I'd expect users to have a basic understanding of VC investment. The expected value of next-gen models times the probability to create them is higher than the billions than they are throwing at it.

        • hatefulmoron a day ago

          > ChatGPT is one of the fastest growing apps ever. Saying that's there's no products is willful blindness by this point.

          That's fair, but I think you're being a little uncharitable to the point being made.

          I would postulate that most ChatGPT users are not using it in a productive capacity, they're using it as a sort of "google that's better at understanding my queries." Obviously that serves a great niche for lots of people, but I don't think it's what mvdtnz had in mind.

  • beernet a day ago

    > LLMs aren't the golden bullet the article hints at. Sure they are improving, but the cost is not falling. It costs a huge amount to create foundation models, and there will be a point where either we have a breakthrough (ie we move from sequence generation to concept synthesis) or the money runs out.

    Very, very few use cases require training a new model. The vast majority can be solved by inferencing existing models, where it is absolutely true that inference costs are steadily declining.

    > making an amazon clone is simple now

    Seriously? Cloning Amazon is not equivalent to cloning a frontend...

    • mnky9800n a day ago

      Imo making it easy to specialise a model is hard right now. Building RAGs and other things like that requires technical knowledge but dumping things on ChatGPT does not. Perhaps building a ui for dummies for specialising models so that they actually learn from new data and not just do inference is the way to go. But I imagine perplexity et al. Is already trying to do this.

    • KaiserPro a day ago

      > Seriously? Cloning Amazon is not equivalent to cloning a frontend...

      A frontend isn't a business, something that a good number of startups forget.

      What I mean is that card payment, shipping, inventory management and sourcing are trivial compared to when amazon started.

visarga a day ago

Whoever owns the problem, owns the benefits of applying AI, not those who train the model, not those who host it. The only moat is to own the problem. AIs will be easily commoditised.

billconan a day ago

they predicted single person companies at 1B valuation with the help of AI. I don't believe it. AI empowers small teams, but AI also levels the playing field by evening out everyone's capabilities.

vb-8448 a day ago

> tl;dr: o3 managed to solve a problem it wasn’t trained on, with orders of magnitude better performance than other state of the art models

Is that true? They said that something called "o3-tuned" has been able to achieve the performance, what does "tuned" mean in this context?

  • soiax a day ago

    Yeah that's false.

    from: https://arcprize.org/blog/oai-o3-pub-breakthrough

    "Note on "tuned": OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data."

whiplash451 a day ago

> Remember those 6+ people ML teams a few years back, working full-time on outcomes that one LLM call could achieve today?

er, what are we talking about here, seriously?

This sentence single-handedly nuked my trust in the post.

turnsout a day ago

The author misses one of the biggest and most obvious moats: brand. Even if food science AIs create a better cola and have an army of robots manufacture it, Coca Cola will still have an advantage (as long as it’s still humans doing the purchasing)

  • tomgs a day ago

    No, he doesn't, item 6 on short-term moats:

    > 6. Reputation / Brand: Building a strong reputation often directly boosts sales, and AI is likely to make the brand-building process easier in surprising ways. Having a brand with a rich history can also be an advantage, given you consistently keep working on it and maintain its value over time.

    • turnsout 17 hours ago

      Right, but brand is not a "Short-term moat."

risyachka a day ago

>>“Better product”: We need to define "better" clearly, but if you're basing this off your R&D efforts, I would very much fear the competition coming my way.

Yeah, no, better product will always be a strong moat.

Competition could copy bette products for decades now without ai, but most software today is trash.

they always "could", they never "will"

camillomiller a day ago

> AGI will not eliminate economics and capitalism - we can have a big philosophical discussion here, but honestly, this is a bit over my current grasp of what is possible - so let’s limit this to something tangible.

And yet this is actually the most interesting point to discuss. What happens if, as OpenAI seems to believe, million of AI agents will fill up the job market globally? What kind of disruption will inevitably happen to the dynamics of capitalistic society if you add a virtually free, or at least "ethically cheaper" workforce to the people that are only trained enough to do these kind of jobs? There is no serious talks of UBI that are anhwhere near reaching feasibility, especially in a country like the US where anything like this would immediately be flagged as socialism or communism.

The only thing that seems realistic in the meantime is a quantum leap of inequalities, with AI being the perfect catalyst for the 0.1% to get to a never-before-seen global elite status, with everyone else from middle class down crushed and struggling to buy even groceries.

This is a direct consequence on the economy and could unfold very quickly. Every solution to the problem, on the other hand, could only works as a long-term set of policies that will constantly clash against ideological positions and collective action issues.

The only optimistic glimpse of hope I still have is that Sam Altman is fundamentally a salesman, and most of the claims we're hearing are just meant to pump and pump before an inevitable, massive dump.

In other words, give a global crisis ignited by the AI bubble pop, rather then whatever else could come if AI actually succeeded to the level its main proponents would like it to.

  • tmelm a day ago

    What you're describing is my main concern of how this AI development might turn out. If we get any sort of autonomous agent (conscious or not) that can replace human workers in massive scale, there is currently no government or other institution prepared or willing to make sure that the people left standing without jobs can afford to live.

    I'm really hopeful that all Sama's talk about ASI / AGI is just salesspeak, because if it isn't we are potentially in for a very dystopic few years.

    But then, it isn't in the top 0.1% interest to replace human workers and cause a hunger-fueled revolution either, I just don't have high hopes that they will care enough to try to prevent it.

  • oytis a day ago

    Erosion of middle class has been happening for quite a while, and the heavens will not fall if it's accelerated. We might well end up in a future where labour has no leverage any more, and the society is divided into a small owner class and everyone else. But as long as most people are fed and have somewhere to live it's unlikely to cause any major unrest.

  • logicchains a day ago

    Tech workers will face the same fate as factory workers of the past: forced to get service jobs or pick up trades (or open their own business). Robotics is a long way behind AI, so jobs that require physical human dexterity/social skills will exist for longer than knowledge work.

    • throw5959 a day ago

      So tech workers will switch to working on robots? Why would that be any less high earning?

      • feznyng a day ago

        I don’t think every tech worker will have the chance to lateral to robotics. Besides the number of opportunities, there’s a massive knowledge gap between building web apps and embedded software.

    • connectsnk a day ago

      What are the jobs that require social skills (except sales)?