quelup 3 days ago

Strobelight is a lifesaver. Especially with high qps services - makes it much easier to see where it's worth spending time trying to optimize.

maknee 3 days ago

"All of this is made possible with the inclusion of frame pointers in all of Meta’s user space binaries, otherwise we couldn’t walk the stack to get all these addresses (or we’d have to do some other complicated/expensive thing which wouldn’t be as efficient)"

This makes things so, so, so much easier. Otherwise, a lot of effort has to go into building an unwinder in eBPF code, essentially porting the .eh_frame CFA/RA/BP calculations.

They claim to have event profilers for non-native languages (e.g. python). Does this mean that they use something similar to https://github.com/benfred/py-spy ? Otherwise, it's not obvious to me how they can read python state.

Lastly, the github repo https://github.com/facebookincubator/strobelight is pretty barebones. Wonder when they'll update it

  • brancz 3 days ago

    Already been done:

    1) native unwinding: https://www.polarsignals.com/blog/posts/2022/11/29/dwarf-bas...

    2) python: https://www.polarsignals.com/blog/posts/2023/10/04/profiling...

    Both available as part of the Parca open source project.

    https://www.parca.dev/

    (Disclaimer I work on Parca and am the founder of Polar Signals)

    • maknee 3 days ago

      Thanks! Those blogs are incredibly useful. Nice work on the profiler. :)

      I have multiple questions if you don’t mind answering them:

      Is there significant overhead to native unwinding and Python in eBPF? eBPF needs to constantly read and copy data structures from user space.

      I ask this because unwinding with frame pointers can be done by reading without copying in userland.

      Python can be run with different engines (CPython, PyPy, etc.) and versions (3.7, 3.8, …), and compilers can reorganize offsets. Reading from offsets seems handwavy to me. Does this work well in practice, and when does it fail?

      • brancz 3 days ago

        Thank you!

        Overhead ultimately depends on the sampling frequency. It defaults to 19 Hz per core, at which it's less than 1%, and that's tried and tested with all sorts of super-heavy Python, JVM, Rust, etc. workloads. Since it's per core, that tends to yield plenty of stacks to build statistical significance quickly. The profiler is essentially a thread-per-core model, which certainly helps for perf.

        The offset approach has evolved a bit; it's mixed with some disassembling today, and with that combination it's rock solid. It is dependent on the engine, and in the case of Python we only support CPython today.

    • tdullien 3 days ago

      Short note: Also available as the standard Otel profiling agent ;)

suralind 2 days ago

That's really cool. I only wish open source projects were this integrated. (Imagine if opening a PR estimated your AWS cost increase after a canary Kubernetes run.)

Also, what's really cool to see is that Facebook's internal UI actually looks decent. I've never worked at a company anywhere close to that size, and the tooling always looks like it was puked up by a dog.

samstave 3 days ago

DOPE

Fractal compute expense modelling is hard.

One may do well in applying fluid dynamics (of a kind we cannot maintain in our heads)

to compute requirements; it will be funny once we realize that everything is micro (pico) fluid dynamics in general

arnath 3 days ago

This is really cool! I've always thought that one thing preventing major competitors to AWS/Azure/GCP is the lack of easy-to-use tooling for machine level monitoring like this. When I was at Microsoft, we built a tool like this that used Windows Firewall filters to track all the network traffic between our services and it was incredibly useful for debugging.

That said, as with anything from Meta, I approach this with a grain of salt and the fact that I can't tell what they stand to gain from this makes me suspicious.

  • theptip 3 days ago

    > the fact that I can't tell what they stand to gain from this makes me suspicious.

    Meta is one of the biggest contributors to FOSS in the world. (React, PyTorch, Llama, …). They stand to gain what every big company does, a community contributing to their infra.

    You’ll note that nobody is open sourcing their ad recommender, that is the one you should be skeptical about if you ever see. You don’t share your secret sauce.

  • mhlakhani 3 days ago

    As a sibling commenter said, it helps brand and recruiting - which meta cares about

    • bigtimesink 3 days ago

      Maybe, but the gold chain, million dollar watch wearing CEO talking about masculine energy doesn't help the brand.

      • jay-barronville 3 days ago

        > Maybe, but the gold chain, million dollar watch wearing CEO talking about masculine energy doesn't help the brand.

        Why not exactly? Between Meta’s great contributions to the open-source ecosystem and Mark behaving more like a normal man nowadays, right now is the only time in a long time that I’ve considered applying to go work at Meta. I’ve heard several of my colleagues and friends say the same thing in recent months.

        • quesera 3 days ago

          Imagining that there's anything "normal" about that knucklehead is why "masculinity" is such an easy target for parody.

          • martinsnow 3 days ago

            What's unattractive about "how do you do, fellow humans"?

          • jay-barronville 3 days ago

            > Imagining that there's anything "normal" about that knucklehead is why "masculinity" is such an easy target for parody.

            You’re certainly entitled to your opinions and ad hominems. Many folks, including myself, disagree with you, so there’s that.

            • quesera 3 days ago

              Yep, and you yours of course.

              But man is that dude a bad example of how to be a human.

              I'll cut him some slack for growing up in public with stupid money and no one to regulate his impulses, but uff da.

              Wake me up when he's old enough for his lagging prefrontal cortex to catch up with the rest of him.

saganus 3 days ago

Ah, this is performance profiling.

Seeing the title and the domain I thought this was user profiling and I was wondering why would Meta be publishing this.

  • hunter2_ 3 days ago

    > the domain

    Perhaps a contributing factor is how HN shows only the final non-eTLD [0] label of the domain. If it showed all labels, you'd have seen "engineering.fb.com" which, while not a dead giveaway, implies that the problem space is technical.

    It would be nice if this aggressive truncation were applied only above a certain threshold of length.

    [0] https://en.wikipedia.org/wiki/Public_Suffix_List

    • teddyh 2 days ago

      I suggested this 10 years ago. <https://news.ycombinator.com/item?id=8911044>

      • hunter2_ 2 days ago

        We are actually saying different things, and your point highlights an error in mine (i.e., I assumed they show the eTLD from the PSL plus one extra label, but apparently they have their own shadow PSL which omits things like pp.se and therefore occasionally shows nothing but an eTLD?) but either way we agree that showing more would be better.

brancz 3 days ago

We’re working hard to bring a lot of Strobelight to everyone through Parca[0] as OSS and Polar Signals[1] as the commercial version. Some parts already exist, with much more to come this year! :)

[0] https://www.parca.dev/

[1] https://www.polarsignals.com/

(Disclaimer: founder of polar signals)

Starlord2048 3 days ago

Between LLVM's optimization passes, static analysis, and modern LLM-powered tools, couldn't we build systems that not only identify but automatically fix these performance issues? GitHub Copilot already suggests code - why not have "Copilot Performance" that refactors inefficient patterns?

I'm curious if anyone is working on "self-healing" systems where the optimization feedback loop is closed automatically rather than requiring human engineers to parse complex profiling data.

iampims 2 days ago

I just wish Meta would open source Scuba.

varunneal 3 days ago

Cool anecdote from inside article

> A seasoned performance engineer was looking through Strobelight data and discovered that by filtering on a particular std::vector function call (using the symbolized file and line number) he could identify computationally expensive array copies that happen unintentionally with the ‘auto’ keyword in C++.

> The engineer turned a few knobs, adjusted his Scuba query, and happened to notice one of these copies in a particularly hot call path in one of Meta’s largest ads services. He then cracked open his code editor to investigate whether this particular vector copy was intentional… it wasn’t.

> It was a simple mistake that any engineer working in C++ has made a hundred times.

> So, the engineer typed an “&” after the auto keyword to indicate we want a reference instead of a copy. It was a one-character commit, which, after it was shipped to production, equated to an estimated 15,000 servers in capacity savings per year!

  • mhlakhani 3 days ago

    That one diff blew my mind when I saw it. It’s a prime example of that story about “you paid me a lot of money to know where to fix that pipe”

  • JoshTriplett 3 days ago

    It's a cool anecdote. It's also a case study in heavyweight copies being something that shouldn't happen by default, and should require explicit annotation indicating that the engineer expects a heavyweight copy of the entire structure.

    • mhlakhani 3 days ago

      I don’t know if that would have helped here, if memory serves me right:

      1. The copy was needed initially
      2. This structure wasn’t as heavy back then

      … over time the code evolved so it became heavy and the copy became unnecessary. That’s harder to find without profiling to guide things

    • ehsankia 3 days ago

      If it's safety/correctness versus performance, I think the default should be the former. Copying, while inefficient, is generally more correct and avoids hard-to-debug errors. It's the whole discussion about premature optimization. I'd rather make a copy than have to make sure the array is not mutated anywhere ever.

      • ltbarcly3 3 days ago

        Yes, everyone agrees with you. The claim you responded to was that you should have to be explicit, because it is very easy to unintentionally copy. For example, it is easy to copy when there is never more than one live pointer to a data structure. It's easy to copy when you allocate a resource in a function and return it, which makes the original an orphan that is then immediately freed. It's extremely easy to make a mistake which prevents move from working, and you have to go back and carefully check if you want to be sure. It should be trivial to just say "move this" and have it be a compile-time error if something isn't right, rather than silently falling back to being wasteful.

      • umanwizard 3 days ago

        This exact problem is basically why Rust exists.

      • JoshTriplett 3 days ago

        I'm not saying it should silently alias any more than it should silently copy. It should give an error, and require the developer to explicitly copy or explicitly alias.

  • howlallday 3 days ago

    [flagged]

    • vosper 3 days ago

      Tired vote-bait quote.

      • Bjartr 3 days ago

        Only because the Overton window has shifted enough to normalize it.

      • howlallday 3 days ago

        Imagine how much server capacity we could save if we didn't waste the equivalent electrical consumption of Belgium convincing your mother she needs more garbage from Temu.

        • phyrex 2 days ago

          And then how would we pay for that server capacity?

ydjje 3 days ago

[flagged]