Is Rust faster than C?

(steveklabnik.com)

164 points | by vincentchau 3 days ago

39 comments

  • pornel 1 hour ago
    In short, the maximum possible speed is the same (+/- some nitpicks), but there can be significant differences in typical code, and it's hard to define what's a realistic typical example.

    The big one is multi-threading. In Rust, whether you use threads or not, all globals must be thread-safe, and the borrow checker requires memory access to be shared XOR mutable. When writing single-threaded code takes 90% of the effort of writing the multi-threaded version, Rust programmers may as well sprinkle threads all over the place, regardless of whether that's a 16x improvement or a 1.5x improvement. In C, the cost/benefit analysis is different. Even just spawning a thread is going to make somebody complain that they can't build the code on their platform due to C11/pthread/openmp. The risk of having to debug heisenbugs means that code typically won't be made multi-threaded unless really necessary, and even then it's preferably kept to simple cases or very coarse-grained splits.

    • arghwhat 57 minutes ago
      To be honest, I think a lot of the justification here is just a difference in standard library and ease of use.

      I wouldn't consider there to be any notable effort in making threads build on target platforms in C relative to normal effort levels in C, but it's objectively more work than `std::thread::spawn(move || { ... });`.

      Despite the benefits, I don't actually think memory safety really plays a role in the usage rate of parallelism. Case in point: Go has no implicit memory safety, with both races and atomicity issues being easy to introduce, and yet it relies much more heavily on concurrency (with a parallelism degree managed by the runtime) and with much less consideration than Rust. After all, `go f()` is even easier.

      (As a personal anecdote, I've probably run into more concurrency-related heisenbugs in Go than I ever did in C, with C heisenbugs more commonly being memory mismanagement in single-threaded code with complex object lifetimes/ownership structures...)

    • jasonjmcghee 6 minutes ago
      > Rust programmers may as well sprinkle threads all over the place regardless whether that's a 16x improvement or 1.5x improvement

      What about energy use and contention?

    • OptionOfT 1 hour ago
      Apart from multithreading, there is more information in the Rust type system. Would that allow more optimizations?
      • kouteiheika 1 hour ago
        Yes. All `&mut` references in Rust are equivalent to C's `restrict` qualified pointers. In the past I measured a ~15% real world performance improvement in one of my projects due to this (rustc has/had a flag where you can turn this on/off; it was disabled by default for quite some time due to codegen bugs in LLVM).
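
        For a sense of what the optimizer gets out of it, here's a minimal sketch (hypothetical, not from the project I measured):

          fn read_between_writes(a: &mut i32, b: &mut i32) -> i32 {
              let before = *a; // load *a once
              *b += 1;         // store through b; it cannot alias a, so *a is unchanged
              before + *a      // the compiler may reuse the earlier load instead of reloading
          }

        In plain C the compiler has to reload *a after the store through b, because without `restrict` the two pointers might point at the same int.
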
        • steveklabnik 1 hour ago
          Not just all &mut T, but also all &T, where the T does not transitively contain an UnsafeCell<T>. Click "show llvm ir" instead of "build" here: https://play.rust-lang.org/?version=stable&mode=release&edit...
          • marcianx 54 minutes ago
            I was confused by this at first since `&T` clearly allows aliasing (which is what C's `restrict` is about). But I realize that Steve meant just the optimization opportunity: you're guaranteed that (in the absence of UB, and of a contained `UnsafeCell<T>`) the data behind the `&T` doesn't change, so you don't have to reload it after mutations through other pointers.
            • steveklabnik 35 minutes ago
              Yes. It's a bit tricky to think about, because while it is literally called 'noalias', what it actually means is more subtle. I already linked to a version of the C spec below, https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf but if anyone is curious, this part is in "6.7.4.2 Formal definition of restrict" on page 122.

              In some ways, this is kind of the core observation of Rust: "shared xor mutable". Aliasing is only an issue if the aliasing leads to mutability. You can frame it in terms of aliasing if you have to assume all aliases can mutate, but if they can't, then that changes things.

      • mhh__ 1 hour ago
        Aliasing info is gold dust to a compiler in various situations, although compilers that have historically lacked it can start smoking crack when it's finally provided.
      • adgjlsfhk1 1 hour ago
        Yes. Specifically, since Rust's design prevents shared mutability, if you have 2 mutable data structures you know that they don't alias, which makes auto-vectorization a whole lot easier.
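
        A tiny sketch of the kind of loop this helps with (hypothetical names):

          // `dst` and `src` are guaranteed not to overlap, so the compiler can
          // vectorize this without emitting runtime overlap checks; the C version
          // needs `restrict` on the pointers to promise the same thing.
          fn axpy(dst: &mut [f32], src: &[f32], k: f32) {
              for (d, s) in dst.iter_mut().zip(src.iter()) {
                  *d += k * *s;
              }
          }
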
      • tcfhgj 51 minutes ago
        what about generics (equivalent to templates in C++), which allow compile-time optimizations all the way down, which may not be possible if the implementation is hidden behind a void*?
    • gpderetta 36 minutes ago
      Then again, often

        #pragma omp for 
      
      is a very low mental-overhead way to speed up code.
      • MeetingsBrowser 23 minutes ago
        Depends on the code.

        OpenMP does nothing to prevent data races, and anything beyond simple for loops quickly becomes difficult to reason about.

      • nurettin 21 minutes ago
        Yes! gcc/omp in general solved a lot of the problems which are conveniently left out in the article.

        Then we have the anecdotal "they failed Firefox layout in C++ twice, then did it in Rust"; to this I sigh in Chrome.

        • steveklabnik 15 minutes ago
          The Rust version of this is "turn .iter() into .par_iter()."
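
          Roughly, a sketch assuming the rayon crate:

            use rayon::prelude::*; // brings the parallel iterator traits into scope

            fn sum_of_squares(items: &[u64]) -> u64 {
                // sequential version would be: items.iter().map(|x| x * x).sum()
                items.par_iter().map(|x| x * x).sum()
            }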

          It's also true that for both, it's not always as easy as "just make the for loop parallel." Stylo is significantly more complex than that.

          > to this I sigh in chrome.

          I'm actually a Chrome user. Does Chrome do what Stylo does? I didn't think it did, but I also haven't really paid attention to the internals of any browsers in the last few years.

    • m-schuetz 1 hour ago
      I'm still confused as to why linux requires linking against TBB for multithreading, thus breaking cmake configs without if(linux) for tbb. That stuff should be included by default without any effort by the developer.
      • sebtron 52 minutes ago
        I think this is related to the C++ standard library implementation.

        Using pthread in C, for example, TBB is not required.

        Not sure about C11 threads, but I have always thought that GLIBC just uses pthread under the hood.

        • m-schuetz 26 minutes ago
          I don't know the details since I'm mainly a Windows dev, but when porting to Linux, TBB has always been a huge pain in the ass since it suddenly becomes an additional required dependency with gcc. Using C++ and std::thread.
    • groundzeros2015 59 minutes ago
      Multithreading does not make code more efficient. It still takes the same amount of work and power (slightly more).

      On a backend system where you already have multiple processes using various cores (databases, web servers, etc) it usually doesn’t make sense as a performance tool.

      And on an embedded device you want to save power so it also rarely makes sense.

      • pirocks 20 minutes ago
        > Multithreading does not make code more efficient. It still takes the same amount of work and power (slightly more).

        In addition to my sibling comments I would like to point out that multithreading quite often can save power. Typically the power consumption of an all-core load is within 2x the power consumption of a single-core load, while being many times faster, assuming your task parallelizes well. This makes sense b/c a fully loaded cpu core still needs all the L3 cache mechanisms, all the DRAM controller mechanisms, etc. to run at full speed. A fully idle system on the other hand can consume very little power if it idles well (which, admittedly, many CPUs don't).

        Edit:

        I would also add that if your system is running a single-threaded database and a single-threaded web server, that still leaves over a hundred underutilized cores on many modern server-class CPUs.

      • NetMageSCW 54 minutes ago
        Multithreading can make an application more responsive and more performant to the end user. If multithreading causes an end user to have to wait less, the code is more performant.
        • groundzeros2015 53 minutes ago
          Yes it can used to reduce latency of a particular task. Did you read my points about when it’s not helpful?

          Are people making user facing apps in rust with GUIs?

  • shmolyneaux 1 minute ago
    While people can nitpick, the article is pretty clear that there isn't a single answer. Everything depends on how you constrain the problem. How much experience does the developer have? What time constraints are there? Is it idiomatic code? How maintainable is the code? You can write C with Rust-like safety checks or Rust with C-like unsafety.

    When you can directly write assembly with either, comparing performance requires having some constraints.

    For what it's worth, I think coding agents could provide a reasonable approximation of what "average" code looks like for a given language. If we benchmark that we'd have some indication of what the typical performance looks like for a given language.

  • OskarS 1 hour ago
    I think personally the answer is "basically no": Rust, C, and C++ are all the same kind of low-level language with the same kind of compiler backends and optimizations; any performance thing you could do in one you can basically do in the other two.

    However, in the spirit of the question: someone mentioned the stricter aliasing rules, that one does come to mind on Rust's side over C/C++. On the other hand, signed integer overflow being UB would count for C/C++ (in general: all the UB in C/C++ not present in Rust is there for performance reasons).

    Another thing I thought of in Rust and C++'s favor is generics. For instance, in C, qsort() takes a function pointer for the comparison function; in Rust and C++, the standard library sorting functions are templated on the comparison function. This means it's much easier for the compiler to specialize the sorting function, inline the comparisons, and optimize around it. I don't know if C compilers specialize qsort() based on the comparison function this way. They might, but it's certainly a lot more to ask of the compiler, and I would argue there are probably many cases like this where C++ and Rust can outperform C because of their much more powerful facilities for specialization.
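
    For concreteness, roughly what the Rust side looks like (a sketch; qsort's comparator is an `int (*)(const void *, const void *)` that gets called through a pointer for every comparison):

      // The closure's type is a generic parameter of sort_by, so the compiler
      // emits a copy of the sort specialized for exactly this comparison and
      // can inline it, rather than making an indirect call per comparison.
      fn sort_by_age(people: &mut [(String, u32)]) {
          people.sort_by(|a, b| a.1.cmp(&b.1));
      }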

    • jandrewrogers 43 minutes ago
      The main performance difference between Rust, C, and C++ is the level of effort required to achieve it. Differences in level of effort between these languages will vary with both the type of code and the context.

      It is an argument about economics. I can write C that is as fast as C++. This requires many times more code that takes longer to write and longer to debug. While the results may be the same, I get far better performance from C++ per unit cost. Budgets of time and money ultimately determine the relative performance of software that actually ships, not the choice of language per se.

      I've done parallel C++ and Rust implementations of code. At least for the kind of performance-engineered software I write, the "unit cost of performance" in Rust is much better than C but still worse than C++. These relative costs depend on the kind of software you write.

      • gf000 26 minutes ago
        > I can write C that is as fast as C++

        I generally agree with your take, but I don't think C is in the same league as Rust or C++. C has absolutely terrible expressivity, you can't even have proper generic data structures. And something like small string optimization that is in standard C++ is basically impossible in C - it's not an effort question, it's a question of "are you even writing code, or assembly".

    • toodlemcnoodle 1 hour ago
      I agree with this whole-heartedly. Rust is a LANGUAGE and C is a LANGUAGE. They are used to describe behaviours. When you COMPILE and then RUN them you can measure speed, but that's dependent on two additional bits that are not intrinsically part of the languages themselves.

      Now: the languages may expose patterns that a compiler can make use of to improve optimizations. That IS interesting, but it is not a question of speed. It is a question of expressibility.

      • pessimizer 1 hour ago
        No. As you've made clear, it's a question of being able to express things in a way that gives more information to a compiler, allowing it to create executables that run faster.

        Saying that a language is about "expressibility" is obvious. A language is nothing other than a form of expression; no more, no less.

        • toodlemcnoodle 55 minutes ago
          Yes. But the speed is dependent on whether or not the compiler makes use of that information and on the machine architecture the compiled code runs on.

          Speed is a function of all three -- not just the language.

          Optimizations for one architecture can lead to perverse behaviours on another (think cache misses and memory layout -- even PROGRAM layout can affect speed).

          These things are out of scope of the language and as engineers I think we ought to aim to be a bit more precise. At a coarse level I can understand and even would agree with something like "Python is slower than C", but the same argument applies there as well.

          But at some point objectivity ought to enter the playing field.

        • irishcoffee 43 minutes ago
          > ... it's a question of being able to express things in a way that gives more information to a compiler, allowing it to create executables that run faster.

          There is expressing idea via code, and there is optimization of code. They are different. Writing what one may think is "fully optimized code" the first time is a mistake, every time, and usually not possible for a codebase of any significant size unless you're a one-in-a-billion savant.

          Programming languages, like all languages, are expressive, but only as expressive as the author wants to be, or knows how to be. Rarely does one write code and think "if I'm not expressive enough in a way the compiler understands, my code might be slightly slower! Can't have that!"

          No, people write code that they think is correct, compile it, and run it. If your goal is to make the most perfect code you possibly can, instead of the 95% solution that is robust, reliable, maintainable, and testable, you're doing it wrong.

          Rust is starting to take up the same mental headspace as LLMs: they're both neat tools. That's it. I don't even mind people being excited about neat tools, because they're neat. The blinders about LLMs/Rust being silver bullets for the software industry need to go. They're just tools.

    • Measter 1 hour ago
      > On the other hand, signed integer overflow being UB would count for C/C++

      C and C++ don't actually have an advantage here, because this is limited to signed integers unless you use compiler-specific intrinsics. Rust's standard library allows you to make overflow on any specific arithmetic operation UB, on both signed and unsigned integers.
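
      For example (a sketch; `unchecked_add` is the unsafe method on the integer types, stabilized relatively recently):

        fn add_fast(x: u32, y: u32) -> u32 {
            let _wrapped = x.wrapping_add(y); // defined wrap-around, never UB
            let _checked = x.checked_add(y);  // Option<u32>, None on overflow
            // Caller must guarantee this never overflows; if it does, it's UB,
            // like signed overflow in C, but opt-in and available for unsigned too.
            unsafe { x.unchecked_add(y) }
        }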

      • OskarS 1 hour ago
        It's interesting, because it's a "cultural" thing like the author discusses; it's a very good point. Sure, you can do unsafe integer arithmetic in Rust. And you can do safe integer arithmetic with overflow in C/C++. But in both cases, do you? Probably you don't in either case.

        "Culturally", C/C++ has opted for "unsafe-but-high-perf" everywhere, and Rust has "safe-but-slightly-lower-perf" everywhere, and you have to go out of your way to do it differently. Similarly with Zig and memory allocators: sure, you can do "dynamically dispatched stateful allocators that you pass to every function that allocates" in C, but do you? No, you probably don't, you probably just use malloc().

        On the other hand: the author's point is that the "culture of safety" and the borrow checker in Rust free your hand to try some things in Rust which you might not in C/C++, and that leads to higher perf. I think that's very true in many cases.

        Again, the answer is more or less "basically no, all these languages are as fast as each other", but the interesting nuance is in what is natural to do as an experienced programmer in them.

        • Xirdus 57 minutes ago
          C++ isn't always "unsafe-but-high-perf". Move semantics are a good example. The spec goes to great lengths to ensure safety in a huge number of scenarios, at the cost of performance. Mostly shows up in two ways: one, unnecessary destructor calls on moved out objects, and two, allowing throwing exceptions in move constructors which prevents most optimizations that would be enabled by having move constructors in the first place (there was an article here recently on this topic).

          Another one is std::shared_ptr. It always uses atomic operations for reference counting and there's no way to disable that behavior or any alternative to use when you don't need thread safety. On the other hand, Rust has both non-atomic Rc and atomic Arc.

    • dana321 1 hour ago
      Rust has linker optimizations that can make it faster in some cases
    • renox 1 hour ago
      >signed integer overflow being UB would count for C/C++

      Then I raise you Zig, which makes unsigned integer overflow UB as well.

      • steveklabnik 1 hour ago
        Interestingly enough, Zig does not use the same terminology as C/C++/Rust do here. Zig has "illegal behavior," which is either "safety checked" or "unchecked." Unchecked illegal behavior is like undefined behavior. Compiler flags and in-source annotations can change the semantics from checked to unchecked or vice versa.

        Anyway that's a long way of saying that you're right, integer overflow is illegal behavior, I just think it's interesting.

      • ladyanita22 1 hour ago
        Rust has UB overflow as well, just unsafe.

        https://doc.rust-lang.org/std/intrinsics/fn.unchecked_add.ht...

    • foldr 1 hour ago
      >in Rust and C++, the standard library sorting functions are templated on the comparison function. This means it's much easier for the compiler to specialize the sorting function, inline the comparisons and optimize around it.

      I think this is something of a myth. Typically, a C compiler can't inline the comparison function passed to qsort because libc is dynamically linked (so the code for qsort isn't available). But if you statically link libc and have LTO, or if you just paste the implementation of qsort into your module, then a compiler can inline qsort's comparison function just as easily as a C++ compiler can inline the comparator passed to std::sort. As for type-specific optimizations, these can generally be done just as well for a (void *) that's been cast to a T as they can be for a T (though you do miss out on the possibility of passing by value).

      That said, I think there is an indirect connection between a templated sort function and the ability to inline: it forces a compiler/linker architecture where the source code of the sort function is available to the compiler when it's generating code for calls to that function.

      • OskarS 51 minutes ago
        qsort is obviously just an example, this situation applies to anything that takes a callback: in C++/Rust, that's almost always generic and the compiler will monomorphize the function and optimize around it, and in C it's almost always a function pointer and a userData argument for state passed on the stack. (and, of course, it applies not just to callbacks, but more broadly to anything templated).

        I'm actually very curious about how good C compilers are at specializing situations like this, I don't actually know. In the vast majority of cases, the C compiler will not have access to the code (either because of dynamic linking like in this example, or because the definition is in another translation unit), but what if it does? Either with static linking and LTO, or because the function is marked "inline" in a header? Will C compilers specialize as aggressively as Rust and C++ are forced to do?

        If anyone has any resources that have looked into this, I would be curious to hear about it.

  • bfrog 2 hours ago
    One example where Rust enables better and faster abstractions is traits. In C you can do this with some ugly methods like macros and such, but in Rust it's not the implementer's choice, it's the caller's choice whether to use dynamic dispatch (a function pointer table in C) or static dispatch (direct function calls!).

    In C the caller typically isn't choosing. The author of some library or API decides this for you.

    This turns out to be fairly significant in something like an embedded context, where function pointers kill icache and rob cycles jumping through hoops. Say you want to bit-bang a bus protocol using GPIO: in C with function pointers this adds potentially non-trivial overhead and your abstraction is no longer (never was) free. Traits let the caller decide to monomorphize that code and get register reads and writes effectively inlined while still having an abstract interface to GPIO. This is excellent!
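
    A minimal sketch of that choice (hypothetical trait, loosely in the spirit of embedded-hal's GPIO traits):

      trait OutputPin {
          fn set_high(&mut self);
          fn set_low(&mut self);
      }

      // Static dispatch: monomorphized per pin type, so set_high/set_low can be
      // inlined down to raw register writes inside the bit-bang loop.
      fn bitbang_byte<P: OutputPin>(pin: &mut P, byte: u8) {
          for i in 0..8 {
              if byte & (1 << i) != 0 { pin.set_high() } else { pin.set_low() }
          }
      }

      // Dynamic dispatch: same interface, but every call goes through a vtable,
      // i.e. the function-pointer-table situation described above.
      fn bitbang_byte_dyn(pin: &mut dyn OutputPin, byte: u8) {
          for i in 0..8 {
              if byte & (1 << i) != 0 { pin.set_high() } else { pin.set_low() }
          }
      }
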

    • emidln 2 hours ago
      I probably enjoy ELF hacking more than most, but patching an ELF binary via LD_PRELOAD, linker hacks, or even manual or assisted relinking tricks are just tools in the bag of performant C/C++ (and probably Rust too, but I don't get paid to make that fast). If you care about perf and for whatever reason are using someone else's code, you should be intimately familiar with your linker, binary format, ABI, and OS in addition to your hardware. It's all bytes in the end, and these abstractions are pliable with standard tooling.

      I'd usually rather have a nice language-level interface for customizing implementation, but ELF and Linux scripting is typically good enough. Binary patching is in a much easier to use place these days with good free tooling and plenty of (admittedly exploit-oriented) tutorials to extrapolate from as examples.

    • K0nserv 2 hours ago
      > In C the caller typically isn't choosing. The author of some library or API decides this for you.

      Tbf this applies to Rust too. If the author writes

         fn foo(bar: Box<dyn BarTrait>)
      
      they have forced the caller into dynamic dispatch.

      Had they written

         fn foo(bar: impl BarTrait)
      
      the choice would've remained open to the caller
      • nicoburns 2 hours ago
        Right, but almost all APIs in Rust use something like

            fn foo(bar: impl BarTrait)
        
        and AFAIK it isn't possible to write that in C (though C++ does allow this kind of thing).
        • bsaul 0 minutes ago
          how do APIs typically manage to actually "use" the "bar" of your example, such as storing it somewhere, without enforcing some kind of constraint?
        • bfrog 8 minutes ago
          In C++ you either use templates or classes and virtuals. In either case the caller doesn't get to decide.
    • embedding-shape 2 hours ago
      It's a tradeoff though, as I think traits make Rust build times grow really quickly. I don't know the exact characteristics of it, and I think they've sped it up compared to how it used to be, but I do remember that you'll get noticeable build slowdowns the more you use traits, especially "complicated" ones.
      • treyd 2 hours ago
        Code is typically run many more times than it's compiled, so this is a perfectly good tradeoff to make.
        • embedding-shape 1 hour ago
          Absolutely, was not trying to claim otherwise. But since we're engineers (at least I like to see myself as one), it's worth always keeping in mind that almost everything comes with tradeoffs, even traits :)

          Someone down the line might be wondering why suddenly their Rust builds take 4x the time after merging something, and just maybe remembering this offhand comment will make them find the issue faster :)

        • cardanome 2 hours ago
          For release builds yes. For debug builds slow compile times kill productivity.
          • torginus 54 minutes ago
            A lot of C++ devs advocate for simple replacements for the STL that do not rely too much on zero-cost abstractions. That way you can have small binaries, fast compiles, and make a fast-debug kinda build where you only turn on a few optimizations.

            That way you can get most of the speed of the Release version, with a fairly good chance of getting usable debug info.

            A huge issue with C++ debug builds is the resulting executables are unusably slow, because the zero-cost abstractions are not zero cost in debug builds.

          • arw0n 40 minutes ago
            I think this also massively depends on your domain, familiarity with the code base and style of programming.

            I've changed my approach significantly over time on how I debug (probably in part due to Rusts slower compile times), and usually get away with 2-3 compiles to fix a bug, but spend more time reasoning about the code.

          • greener_grass 2 hours ago
            If you are not willing to make this trade then how much of a priority was run-time performance, really?
            • esrauch 1 hour ago
              It's never the case that only one thing is important.

              In the extreme, you surely wouldn't accept a 1 day or even 1 week build time for example? It seems like that could be possible and not hypothetical for a 1 week build since a system could fuzz over candidate compilation, and run load tests and do PGO and deliver something better. But even if runtime performance was so important that you had such a system, it's obvious you wouldn't ever have developer cycles that take a week to compile.

              Build time also even does matter for release: if you have a critical bug in production and need to ship the fix, a 1 hour build time can still lose you a lot here. Release build time doesn't matter until it does.

          • kace91 1 hour ago
            Doesn’t rust have incremental builds to speed up debug compilation? How slow are we talking here?
            • steveklabnik 1 hour ago
              Rust does have incremental rebuilds, yes.

              Folks have worked tirelessly to improve the speed of the Rust compiler, and it's gotten significantly faster over time. However, there are also language-level reasons why it can take longer to compile than other languages, though the initial guess of "because of the safety checks" is not one of them, those are quite fast.

              > How slow are we talking here?

              It really depends on a large number of factors. I think saying "roughly like C++" isn't totally unfair, though again, it really depends.

            • esrauch 1 hour ago
              People do have cold Rust compiles that can push up into the hours. Large crates often make design choices that are shaped to be more compile-time friendly.

              Note that C++ also has almost as large a problem with compile times, with large build fanouts including on templates, and it's not always realistic for incremental builds to solve it either, especially the time burnt on linking, e.g. I believe Chromium development often uses a mode with .dlls dynamic linking instead of what they release which is all static linked exactly to speed up incremental development. The "fast" case is C not C++.

              • embedding-shape 1 hour ago
                > I believe Chromium development often uses a mode with .dlls dynamic linking instead of what they release which is all static linked exactly to speed up incremental development. The "fast" case is C not C++.

                Bevy, a Rust ECS framework for building games (among other things), has a similar solution by offering a build/rust "feature" that enables dynamic linking (called "dynamic_linking"). https://bevy.org/learn/quick-start/getting-started/setup/#dy...

          • therealdkz 1 hour ago
            [dead]
      • cogman10 1 hour ago
        AFAIK, it's not the traits that do it but rather the generics.

        Rust does make it a lot easier to use generics which is likely why using more traits appears to be the cause of longer build times. I think it's just more that the more traits you have, the more likely you are to stumble over some generic code which ultimately generates more code.

        • embedding-shape 1 hour ago
          > AFAIK, it's not the traits that does it but rather the generics.

          Aah, yes, that sounds more correct, the end result is the same, I failed to remember the correct mechanism that led to it. Thank you for the correction!

  • jkarneges 7 minutes ago
    > Some people have reported that, thanks to Rust’s checks, they are more willing to write code that’s a bit more dangerous than in the equivalent C (or C++)

    I rewrote a C project in Rust some years ago, and in the Rust version I included many optimizations that I probably wouldn't have in C code, thanks to the ability to do them "fearlessly". The end result was so much more performant I had to double check I didn't leave something out!

  • mid-kid 1 hour ago
    I almost ignored this post because I can't stand this particular war, where examples are cherry picked to prove either answer.

    I'm very happy to see the nuanced take in this article, slowly deconstructing the implicit assumptions proposed by the person asking this question, to arrive at the same conclusion that I long have. I hope this post reaches the right people.

    A particular language doesn't have a "speed"; a particular implementation may, and the language may have properties that make it difficult to make a fast implementation (of those specific properties/features) given the constraints of our current computer architectures. Even then, there are usually too many variables to make a generalized statement, and the question often presumes that performance is measured as total cpu time.

    • steveklabnik 1 hour ago
      I will admit the title was a bit of a gamble, but thank you for taking the time to read it and I'm glad that you enjoyed it in the end.
      • jibal 3 minutes ago
        We recently had a post here where the claim being refuted was in quotes in the title, but half the comments were as if the article were making the claim, clearly indicating that people didn't read it (and don't understand how quote marks work).
  • pizlonator 20 minutes ago
    It's not binary. If you try hard enough, I bet you can make an argument that C is faster and you can make an argument that Rust is faster.

    There is a set of programs that you can write in C and that are correct, that you cannot write in Rust without leaning into unsafe code. So if by "Rust" we mean "the safe subset of Rust", then this implies that there must be optimal algorithms that can be written in C but not in Rust.

    On the other hand, Rust's ownership semantics are like rocket fuel for the compiler's understanding of aliasing. The inability of compilers to track aliasing precisely is a top inhibitor of load elimination in C compilers (so much so that C compiler writers lean into shady nonsense like strict aliasing, and even that doesn't buy very much precision). But a Rust compiler doesn't need to rely on shady imprecise nonsense. Therefore, there are surely algorithms that, if written in a straightforward way in both Rust and C, will be faster in Rust. I could even imagine there are algorithms for which it would be very unnatural to write the C code in a way that matches Rust's performance.

    I'm purely speaking theoretically, I have no examples of either case. Just trying to provide my PL/compiler perspective

    • umanwizard 18 minutes ago
      > There is a set of programs that you can write in C and that are correct, that you cannot write in Rust without leaning into unsafe code. So if by "Rust" we mean "the safe subset of Rust"

      Well, unsafe rust is part of rust. So no, we don’t mean that.

  • kibwen 2 hours ago
    I like to say that there are two primary factors when we talk about how "fast" a language is:

    1. What costs does the language actively inject into a program?

    2. What optimizations does the language facilitate?

    Most of the time, it's sufficient to just think about the first point. C and Rust are faster than Python and Javascript because the dynamic nature of the latter two requires implementations to inject runtime checks all over the place to enable that dynamism. Rust and C simply inject essentially zero active runtime checks, so membership in this club is easy to verify.

    The second one is where we get bogged down, because drawing clean conclusions is complicated by the (possibly theoretical) existence of optimizing compilers that can leverage the optimizability inherent to the language, as well as the inherent fragility of such optimizations in practice. This is where we find ourselves saying things like "well Rust could have an advantage over C, since it frequently has more precise and comprehensive aliasing information to pass to the optimizer", though measuring this benefit is nontrivial and it's unclear how well LLVM is thoroughly utilizing this information at present. At the same time, the enormous observed gulf between Rust in release mode (where it's as fast as C) and Rust in debug mode (when it's as slow as Ruby) shows how important this consideration is; Rust would not have achieved C speeds if it did not carefully pick abstractions that were amenable to optimization.

    • steveklabnik 1 hour ago
      I like this framing a lot.

      It's also interesting to think about this in terms of the "zero cost abstractions"/"zero overhead abstractions" idea, which Stroustrup wrote as "What you don't use, you don't pay for. What you do use, you couldn't hand code any better". The first sentence is about 1, and the second one is about what you're able to do with 2.

    • bluGill 1 hour ago
      Is Javascript significantly slower? It is extremely common in the real world and so a lot of effort has gone into optimizing it - v8 is very good. Yes C and Rust enable more optimizations: they will be slightly faster, but javascript has had a lot of effort put into making it run fast.
      • kibwen 23 minutes ago
        Yes. V8 (and other Javascript JIT engines) are very good, with a lot of effort put into them by talented engineers. But there's a floor on performance imposed by the language's own semantics. Of course, if your program is I/O bound rather than CPU bound (especially at network-scale latencies), this may never be noticeable. But a Javascript program will use significantly more CPU, significantly more memory, and both CPU and memory usage will be significantly more variable and less predictable than a program written in C or Rust.
      • sgeisenh 1 hour ago
        Yes, for most real-world examples JavaScript is significantly slower; JIT isn’t free and can be very sensitive to small code changes, you also have to consider the garbage collector.

        Speed is also not the only metric, Rust and C enable much better control over memory usage. In general, it is easier to write a memory-efficient program in Rust or C than it is in JS.

    • AnimalMuppet 1 hour ago
      I think there's a third question, but I don't know quite how to phrase it. Maybe "how real-world fast is the language?" or "how fast is the language in the hands of someone who isn't obsessively thinking about speed?"

      That is, most of the time, most of the users aren't thinking about how to squeeze the last tenth of a percent of speed out of it. They aren't thinking about speed at all. They're thinking about writing code that works at all, and that hopefully doesn't crash too often. How fast is the language for them? Does it nudge them toward faster code, or slower? Are the default, idiomatic ways of writing things the fast way, or the slow way?

  • HarHarVeryFunny 1 hour ago
    In general "Is programming language X faster than Y" is a meaningless question. It mostly comes down to specific implementations - specific compilers, interpreters, etc.

    The only case where one language is likely to be inherently faster than another is when the other language is so high level or abstracted away from the processors it is going to run on that an optimizing compiler is going to have a hard time bridging that gap. It may take more work for an optimizing compiler to generate good code for one language than another, for example by having to recognize when aliasing doesn't exist, but again this is ultimately a matter of implementation not language.

    • gpderetta 27 minutes ago
      Language design still has a huge impact on which optimizations are practically implementable.

      The Mythical Sufficiently Smart Compiler is, in fact, still mythical.

      • HarHarVeryFunny 2 minutes ago
        Sure, but not all compilers are created equal and are going to go to the same lengths of analysis to discover optimization opportunities, or to have the same quality of code generation for that matter.

        It might be interesting to compare LLVM-generated code (at the same/maximum optimization level) for Rust vs C, which would remove optimizer LOE as a factor and more isolate difficulties/opportunities caused by the respective languages.

  • pron 2 hours ago
    The question is what do we mean by "a fast language"? We could mean it to be how fast the fastest code that a performance expert in that language, with no resource constraints, could write. Or, we can restrict it to "idiomatic" code. Or we can say that a fast language is the one where an average programmer is most likely to produce fast code with a given budget (in which case probably none of the languages mentioned here are among the fastest).
    • jillesvangurp 1 hour ago
      It's compilers and compiler optimizations that make code run fast. The real question is whether the Rust language and the richer memory semantics it has give the Rust compiler a bit more context for optimizing that the C compiler wouldn't have unless you hand-optimize your code.

      If you do hand optimize your code, all bets are off. With both languages. But I think the notion that the Rust compiler has more context for optimizing than the C compiler is maybe not as controversial as the notion that language X is better/faster than language Y. Ultimately, producing fast/optimal code in C kind of is the whole point of C. And there aren't really any hacks you can do in C that you can't do in Rust, or vice versa. So, it would be hard to make the case that Rust is slower than C or the other way around.

      However, there have been a few rewrites of popular unix tools in Rust that benchmark a bit faster than their C equivalents. Could those be optimized in C? Probably, but they just haven't been. But there is a case there for arguing that maybe Rust code is a bit easier to make fast than C code.

      • gf000 11 minutes ago
        > It's compilers and compiler optimizations that make code run fast

        Well, then in many cases we are talking about LLVM vs LLVM.

        > Ultimately, producing fast/optimal code in C kind of is the whole point of C

        Mostly a nitpick, but I'm not convinced that's true. The performance queen has been traditionally C++. In C projects it's not rare to see very suboptimal design choices mandated by the language's very low expressivity (e.g. no multi-threading, sticking to an easier data structure, etc).

    • DoctorOW 2 hours ago
      > we can say that a fast language is the one where an average programmer is most likely to produce fast code with a given budget

      I'd say most people use this definition, with the caveat that there's no official "average programmer", and everyone has different standards.

      • pron 1 hour ago
        Right, but if we assume that programmers' compensation is statistically correlated with their skill, then we can drop "average" and just talk about budget.
        • Avicebron 1 hour ago
          That seems like a wild assumption to make.
          • gf000 10 minutes ago
            Statistically? I don't think it's that wild.

            If you prefer it, salaries correlate with years of experience, and the latter surely correlates with skills, right?

            (No, this doesn't mean that every 10 years XP dev is better than a 3 years XP one, but it's definitely a strong correlation)

    • justin66 2 hours ago
      These are the languages an "average programmer" would use. What language are you thinking of?
      • pron 2 hours ago
        I may be biased, but I think that if you have a budget that's reasonable in the industry for some project size and includes not only the initial development but also maintenance and evolution over the software's lifetime, especially when it's not small (say over 200KLOC), and you want to choose the language that would give you the fastest outcome, you will not get a faster program than if you chose Java. To get a faster program in any language, if possible, would require a significantly higher budget (especially for the maintenance and evolution).
        • xnorswap 1 hour ago
          Do you think C# / .NET doesn't stack up in terms of budget, or not stack up in terms of runtime speed?
          • pron 1 hour ago
            It's probably in the same ballpark. To me, the contenders for "the fastest language" include Java, C#, and Go and not many more.
            • justin66 1 hour ago
              Ah thanks. That clarifies things.
        • cdelsolar 2 hours ago
          Go?
          • pron 1 hour ago
            I don't think so, but it may not be far behind. More importantly, though, I'm fairly confident it won't be Assembly, or C, or C++, or Rust, or Zig, but also not Python, or TS/JS. The candidates would most likely include Java, C#, and Go.
      • swiftcoder 1 hour ago
        Purely by the numbers, an "average programmer" is much more likely to use Javascript, Python, or Java. The native languages have been a bit of a niche field since the late 90's (i.e. heavily slanted towards OS, embedded, and gamedev folks)
  • lionkor 22 minutes ago
    To answer the headline: No. Rust is not faster than C. C isn't faster than Rust either.

    What is fast is writing code with zero abstractions or zero cost abstractions, and if you can't do that (because writing assembly sucks), get as close as possible.

    Each layer you pile on adds abstraction. I've never had issues optimizing and profiling C code -- the tooling is excellent and the optimizations make sense. Get into Rust profiling and optimization and you're already in the weeds.

    Want it fast? Turn off the runtime checks by calling unsafe code. From there, you can hope and pray like with most LLVM compiled languages.

    If you want a stupid fast interpreter in C, you do computed goto, write a comment explaining why it's not, in fact, cursed, and you're done. In C++, Rust, etc. you'll sit there examining the generated code to see if the heuristics ended up generating effectively-computed-goto code or not.

    Not to mention panics, which are needed but also have branching overhead.

    The only thing that is faster in Rust by default is probably math: You have so many more errors and warnings which avoid overflows, casts, etc. that you didn't mean to do. That makes a small difference.

    I love Rust. If I want pure speed, I write unsafe Rust, not C. But it's not going to be as fast as trivial C code by default, because the tradeoffs fundamentally differ: Rust is safe by default, and C is efficient by default.

    The article makes some of the same points but it doesn't read like the author has spent weeks in a profiler combing over machine code to optimize Rust code. Sadly I have, and I'm not getting that time back.

    • steveklabnik 19 minutes ago
      > Want it fast? Turn off the runtime checks by calling unsafe code.

      You can do that for sure, but you can also sometimes write your code in a different way. https://davidlattimore.github.io/posts/2025/09/02/rustforge-... is an interesting collection of these.

      > it doesn't read like the author has spent weeks in a profiler combing over machine code to optimize Rust code

      It is true that this blog post was not intended to be a comprehensive comparison of the ways in which Rust and C differ in performance. It was meant to be a higher level discussion on the nature of the question itself, using a few examples to try and draw out interesting aspects of that comparison.

  • gignico 2 hours ago
    The article does not mention the possible additional optimisation opportunities that arise in Rust code due to stricter aliasing rules of references. But I don’t have an example in mind. Does anyone know of an example of it happening in real code?
    • steveklabnik 2 hours ago
      In the spirit of the article... there's a few ways in which this could go :)

      The first is, we do have some amount of empirical evidence here: Rust had to turn its aliasing optimizations on and off again a few times due to bugs in LLVM. A comment from 2021: https://github.com/rust-lang/rust/issues/54878#issuecomment-...

      > When noalias annotations were first disabled in 2015 it resulted in between 0-5% increased runtime in various benchmarks.

      This leaves us with a few relevant questions:

      Were those benchmarks representative of real world code? (They're not linked, so we cannot know. The author is reliable, as far as I'm concerned, but we have no way to verify this off-hand comment directly, I link to it specifically because I'd take the author at their word. They do not make any claim about this, specifically.)

      Those benchmarks are for Rust code with optimizations turned off and back on again, not Rust code vs C code. Does that make this a good benchmark of the question, or a bad one?

      These were llvm's 'noalias' markers, which were written for `restrict` in C. Do those semantics actually take full advantage of Rust's aliasing model, or not? Could a compiler which implements these optimizations in a different way do better? (I'm actually not fully sure of the latest here, and I suspect some corners would be relying on the stacked borrows vs tree borrows stuff being finalized)

      • Measter 1 hour ago
        Another issue we have to consider here for the measurements taken then is that it was miscompiling, which, to me, calls into question how much we can trust that performance change.

        Additionally, it was 10 years ago and LLVM has changed. It could be that LLVM does better now, or it could do worse. I would actually be interested in seeing some benchmarks with modern rustc.

    • bluGill 2 hours ago
      Many C programs are valid C++ and are faster when compiled with a C++ compiler because of those stricter aliasing and type rules. Like you, though, I have no examples.
    • pornel 2 hours ago
      When the optimizer knows writes can't change the reads, it can reorder and coalesce them. The main benefit of that is enabling autovectorization in more cases. Otherwise it saves a few loads here and there.
    • Karliss 1 hour ago
      Not exactly real world, but a real code example demonstrating the strict aliasing rule in action for C++: https://godbolt.org/z/WvMb34Kea Rust should have even more opportunities for this due to the restrictions it has on mutable references.

      There are 2 main differences between the versions with and without strict aliasing. Without strict aliasing the compiler can't assume that the result accumulator doesn't change during the loop, and it has to repeatedly read/write it each iteration. With strict aliasing it can just read it into a register, do the looping, and write the result back at the end once. The second effect is that with strict aliasing enabled the compiler can vectorize the loop, processing 4 floats at a time; most likely the same uncertainty about the accumulator prevents vectorization without strict aliasing.

      If you want a slightly simpler example, you can disable vectorization by adding '-fno-tree-vectorize'. With it disabled there is still a difference in the handling of the accumulator.

      Using restrict pointers and multiple same-type input arrays it would probably be possible to make something closer to a real-world example.

      • steveklabnik 1 hour ago
        Note that Rust does not do strict aliasing, its model is different.

        Also note that C++ does not have restrict, formally speaking, though it is a common compiler extension. It's a C feature only!

    • Tuna-Fish 1 hour ago
      I believe this advantage is currently mostly theoretical, as the code ultimately gets compiled with LLVM which does not fully utilize all the additional optimization opportunities.
      • adgjlsfhk1 1 hour ago
        LLVM doesn't fully utilize all the power, but it does use an increasing amount every year. Flang and Rust have both given LLVM plenty of example code and a fair number of contributors who want to make LLVM work better for them.
  • torginus 1 hour ago
    Well, if we define a reasonable yardstick for this, and try to write idiomatic code without low-level hacks and optimization, Rust has the potential to be faster because its more strict aliasing rules might enable optimizations that the C compiler wouldn't try.

    However I remember reading a few years back that due to the Rust frontend not communicating these opportunities to LLVM, and LLVM not being designed to take advantage of them, the real-world gains do not always materialize.

    Also, sometimes people write Rust code that does not compile under the borrow checker's rules, and alleviate this either by cloning objects or using RefCell, both of which have a runtime cost.

  • otikik 2 hours ago
    I know which one is faster to produce an unintended segfault.
  • ratmice 1 hour ago
    I feel like another optimization that Rust code can exploit is uninhabited types. When combined with generics and sum types these can lead to entire branches being unreachable at the type level, like Option<!> or Result<T, !>. Rust hasn't stabilized !, but you can declare an equivalent in other ways, such as an empty enum with no variants.
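
    Roughly, on stable Rust (an empty enum standing in for !):

      enum Never {} // uninhabited: no value of this type can ever be constructed

      // Err(Never) can't exist, so the error branch is statically dead and the
      // whole thing can compile down to just handing back the Ok payload.
      fn always_ok<T>(r: Result<T, Never>) -> T {
          match r {
              Ok(v) => v,
              Err(e) => match e {}, // matching an empty enum needs no arms
          }
      }
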
  • NetMageSCW 56 minutes ago
    That article did not contribute anything to the answer of the question quoted.
  • farnulfo 1 hour ago
    Is there a common pattern for "Is language X faster than language Y"? Like, what is your definition of faster: faster to develop in, to start, to execute, to handle different workloads with the same binaries (like JIT)?
  • umanwizard 19 minutes ago
    One thing I’ve seen MANY times in C that isn’t an issue in rust, is people using slow linked lists because writing a proper btree or hash map takes substantially more effort.
  • taminka 2 hours ago
    struct field alignment/padding isn't part of the C spec iirc (at least not in the way mentioned in the article), but it's almost always done that way, which is important for having a stable abi

    also, if performance is critical to you, profile stuff and compare outputted assembly, more often than not you'll find that llvm just outputs the same thing in both cases

    • steveklabnik 2 hours ago
      Here's the draft of C23: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf

      See "6.7.3.2 Structure and union specifiers", paragraph 16 & 17:

      > Each non-bit-field member of a structure or union object is aligned in an implementation-defined manner appropriate to its type.

      > Within a structure object, the non-bit-field members and the units in which bit-fields reside have addresses that increase in the order in which they are declared.

      • taminka 18 minutes ago
        so they're ordered, which i didn't dispute, but alignment is implementation defined, so it could be aligned to the biggest field (like in the article), or packed in whatever (sequential) order the particular platform demands, which was my initial point
        • steveklabnik 0 minutes ago
          Ah, sorry, you're right I forgot about alignment. Yes, alignment is implementation defined, paragraph 16:

          > Each non-bit-field member of a structure or union object is aligned in an implementation-defined manner appropriate to its type.

          But, I still don't think that what you've said is true. This is because alignment isn't decided per-object, but per type. This is 6.2.8 Alignment of objects.

          You also have to be able to take a pointer to a (non-bitfield) member, and those pointers must be aligned. This is also why __attribute__((packed)) and such are non-standard extensions.

          Then again: I have not passed the C specification lawyer bar, so it is possible that I am wrong here. I'm just an armchair lawyer. :)

    • cyco130 2 hours ago
      It is indeed part of the standard. It says "Within a structure object, the non-bit-field members and the units in which bit-fields reside have addresses that increase in the order in which they are declared"[1] which doesn't allow implementations to reorder fields, at least according to my understanding.

      [1] https://open-std.org/JTC1/SC22/WG14/www/docs/n3220.pdf section 6.7.3.2, paragraph 17.

      • taminka 21 minutes ago
        i was talking abt padding/alignment, not ordering, that's indeed not allowed you're right
    • ajross 2 hours ago
      > struct field alignment/padding isn't part of the C spec iirc

      It's part of the ABI spec. It's true that C evolved in an ad hoc way and so the formal rigor got spread around to a bunch of different stakeholders. It's not true that C is a lawless wasteland where all behavior is subject to capricious and random whims, which is an attitude I see a lot in some communities.

      People write low level software to deal with memory layout and alignment every day in C, have for forty years, and aren't stopping any time soon.

  • IshKebab 1 hour ago
    I think the only reasonable way to interpret this question is "is Rust written by reasonably competent Rust developer spending a reasonable amount of time faster/slower than an equally competent C developer spending the same amount of time".

    I don't think a language should count as "fast" if it takes an expert or an inordinate amount of time to get good performance, because most code won't have that.

    So on those grounds I would say Rust probably is faster than C, because it makes it much much easier to use multithreading and more optimised libraries. For example a lot of C code uses linked lists because they're easy to write in C, even when a vector would be faster and more appropriate. Multithreading can just be a one line change in Rust.

    • kstrauser 44 minutes ago
      Or honestly, anything involving a hashmap. Of course you can write those in C, but it’s enough friction that most people won’t for minor things. In Rust, it’s trivial, so people are more likely to use them.
    • oguz-ismail2 1 hour ago
      So assembly is the slowest language?
      • hmry 1 hour ago
        Depends. If it takes an assembly programmer 8 hours to implement <X>, can an equally proficient Python programmer spending 8 hours to implement <X> create a faster program?

        Let's say they only need 2 hours to get the <X> to work, and can use the remaining 6 hours for optimizing. Can 6 hours of optimizing a Python program make it faster than the assembly program?

        The answer isn't obvious, and certainly depends on the specific <X>. I can imagine various <X> where even unlimited time spent optimizing Python code won't produce faster results than the assembly code, unless you drop into C/C++/Zig/Rust/D and write a native Python extension (and of course, at that point you're not comparing against Python, but that native language).

  • netbioserror 22 minutes ago
    The real question at the core of any production: What's the minimum performance cost we can pay for abstractions that substantially boost development efficiency and maintainability? Just like in other engineering fields, the product tuned to yield the absolute maximum possible value in one attribute makes crippling sacrifices along other axes.
  • senko 2 hours ago
    Interesting post, but read it for the journey, not the destination[0].

    [0] tldr: "I think that there are so many variables that it is difficult to draw generalized conclusions."

  • FrustratedMonky 59 minutes ago
    Sorry, maybe a stupid question. But can't this be decided by some benchmarks, using some of the features in the article that purport to make Rust faster?
    • steveklabnik 53 minutes ago
      Not a stupid question :)

      Part of what I'm getting at here is that you have to decide what is in those benchmarks in the first place. Yes, benchmarks would be an important part of answering this question, but it's not just one question: it's a bunch of related but different questions.

  • jonstewart 2 hours ago
    “It’s the memory, stupid!” So wrote Richard Sites, lead designer of the famous DEC Alpha chip, in 1996 (http://cva.stanford.edu/classes/cs99s/papers/architects_look...). It’s rung true for 30 years.

    Where C application code often suffers, but by no means always, is the use of memory for data structures. A nice big chunk of static memory will make a function fast, but I’ve seen many C routines malloc memory, do a strcpy, compute a bit, and free it at the end, over and over, because there’s no convenient place to retain the state. There are no vectors, no hash maps, no crates.io and cargo to add a well-optimized data structure library.
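
    A minimal sketch of what "a convenient place to retain the state" can look like in Rust (the Normalizer type here is made up for illustration): a scratch buffer owned by a struct and reused across calls, rather than malloc/strcpy/free on every call:

      struct Normalizer {
          scratch: String, // reused allocation: grows once, then amortized across calls
      }

      impl Normalizer {
          fn new() -> Self {
              Normalizer { scratch: String::new() }
          }

          fn normalize(&mut self, input: &str) -> &str {
              self.scratch.clear(); // keeps the existing capacity
              for ch in input.chars() {
                  self.scratch.push(ch.to_ascii_lowercase());
              }
              &self.scratch
          }
      }

      fn main() {
          let mut n = Normalizer::new();
          println!("{}", n.normalize("Hello, WORLD"));
          println!("{}", n.normalize("SECOND call reuses the same buffer"));
      }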

    It is for this reason I believe that Rust, and C++, have an advantage over C when it comes to writing fast code, because it’s much easier to drop in a good data structure. To a certain extent I think C++ has an advantage over Rust due to easier and better control over layout.

    • JacoboJacobi 1 hour ago
      I'd certainly agree that malloc is the Achilles heel of most real-world C. Overall, though, C++ was not a particularly good solution to memory efficiency, since having OO available made the situation look like a fast sprint to the cake shop.
  • effnorwood 1 hour ago
    It depends on a lot. If you need to go fast, look to assembly.
    • HarHarVeryFunny 1 hour ago
      As I just posted, any speed comparison needs to be based on specific implementations (compiler A vs compiler B), not languages.

      When it comes to assembly, the "compiler" is the person writing the code, and while assembly gives you the maximum flexibility to potentially equal or outperform any compiler for any language, there are not too many people with the skill to do that, especially when writing large programs (which due to the effort required are rarely written in assembler). In general there is much more potential for improving the speed of programs by changing the design and using better algorithms, which is where high level languages offer a big benefit by making this easier.

  • tycoon666 2 hours ago
    No.
    • steveklabnik 2 hours ago
      I love Betteridge's Law, and so one small thing I was trying to do here was subvert it a bit. Instead of "no," in this case, the answer is "the question is malformed."
  • bjourne 1 hour ago
    Haha, I'm tooting my own ten-year-old horn: https://news.ycombinator.com/item?id=12749717 Also see steveklabnik's comments. They are relevant.

    Back then the C implementation of the (single) micro benchmark beat the Rust implementation; I could squeeze out more performance by precisely controlling the loop unrolling. Nowadays, I don't really care and operate under the assumption that "Python is faster than $X, and if it is not, it is still fast enough!"

  • jryb 1 hour ago
    This post might get the record for people responding to the title without reading the article. Jeez people, it takes five seconds to discover that it subverts expectations.
  • TZubiri 1 hour ago
    >If we assume C is the ‘fastest language,’ whatever that means

    I agree that it has no meaning. Speed(language) is undefined, therefore there is no faster language.

    I run into this often because Python is referred to as a slow language, but since a Python programmer can write more features than a C programmer in the same time (at least in my space), it often results in faster programs in Python, because some of those features are optimizations.

    Now speed(program(language, programmer)) is defined, and you could do an experiment by having programmers of different languages write the same program and comparing the execution times.

  • mgaunard 1 hour ago
    Betteridge's law of headlines already has an answer to that one.
  • classified 1 hour ago
    > Mozilla tried to parallelize Firefox’s style layout twice in C++, and both times the project failed. The multithreading was too tricky to get right.

    That is a damn good reason to choose Rust over C++, even if the Rust implementation of the "same" thing turns out to be a bit slower.

    • bluGill 1 hour ago
      Only if it is repeatable. We have no information on what they learned in the two failed attempts - it is likely that they learned from the failures and made other architectural changes that enabled the final attempt to work. As such we cannot say anything definitive about this.

      Rust does have some interesting features, which restrict what you are allowed to do and thus make some things impossible, but in turn make other things easier. It is highly likely that those restrictions are part of what made this possible. Given infinite resources (which you never have), a C++ implementation could be faster because it has better shared-data concepts - but those same shared-data concepts make it extremely hard to reason about multi-threaded code, and so, as a human, you might not be able to make it work.

      • steveklabnik 1 hour ago
        We do have some information: https://youtu.be/Y6SSTRr2mFU?t=361 (linked with the specific timestamp)

        In short, the previous two attempts were done by completely different groups of people, a few years apart. Your direct question about whether wisdom from those two attempts was shared, either between them or with Stylo, isn't specifically discussed, though.

        > a C++ implementation could be faster because it has better shared data concepts

        What concepts are those?

        • bluGill 0 minutes ago
          > What concepts are those?

          Data can be modified by any thread that wants to. It is up to you to ensure that modifications work correctly without race conditions. In Rust you can't do this (unsafe aside): the borrow checker rejects data access patterns that it can't prove correct.

          Again, let me be clear: the things Rust doesn't allow are hard to get correct.
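
          A minimal sketch of what safe Rust forces you into instead (scoped threads plus a Mutex; the unsynchronised version simply does not compile):

            use std::sync::Mutex;
            use std::thread;

            fn main() {
                // Safe Rust will not let the threads mutate the counter directly;
                // access has to go through a type that proves it is synchronised.
                let counter = Mutex::new(0u64);

                thread::scope(|s| {
                    for _ in 0..4 {
                        s.spawn(|| {
                            for _ in 0..1_000 {
                                *counter.lock().unwrap() += 1;
                            }
                        });
                    }
                });

                println!("{}", counter.into_inner().unwrap()); // 4000
            }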

  • uwagar 1 hour ago
    no way bruv.
  • rvz 31 minutes ago
    TLDR: No.

    Betteridge's Law of Headlines, saved you a click.

  • einpoklum 2 hours ago
    tl;dr: Rust officially supports inline assembly, so it's fast, while inline assembly is not officially part of the C language spec. Plus more points which do not actually indicate Rust is faster than C.
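
    (For reference, the inline assembly in question, a minimal sketch that assumes an x86_64 target; asm! has been stable since Rust 1.59:)

      use std::arch::asm;

      fn add_one(mut x: u64) -> u64 {
          // Adds 1 to x with a single inline-assembly instruction.
          unsafe {
              asm!("add {0}, 1", inout(reg) x);
          }
          x
      }

      fn main() {
          assert_eq!(add_one(41), 42);
      }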

    ... well, that's what I get for reading an article with a silly title.

    • steveklabnik 2 hours ago
      That’s not how I would summarize what I wrote, for what it’s worth. My summary would be “the question is malformed, you need to first state what the boundaries are for comparison before you can make any conclusions.” I think this is an interesting thing to discuss because many people assume that the answer to “is x faster than C?” to be “no” for all values of X.
      • bigfishrunning 2 hours ago
        > many people assume that the answer to “is x faster than C?” to be “no” for all values of X.

        This is because C does so little for you -- bounds checking must be done explicitly, for instance, like you mention in the article, so C is "faster" unless you work around Rust's bounds checking. It reminds me of some West Virginia residents I know who are very proud of how low their taxes are -- the roads are falling apart, but the taxes are very low! C is this way too.

        C is pretty optimally fast in the trivial case, but once you add bounds checking and error handling and memory management, its edge over Rust and Zig and other lowish-level languages is much, much smaller.
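
        A rough sketch of the point (not a benchmark):

          fn sum_indexed(v: &[u64]) -> u64 {
              let mut total = 0;
              for i in 0..v.len() {
                  total += v[i]; // bounds-checked; the optimizer often removes it, but that's not guaranteed
              }
              total
          }

          fn sum_iter(v: &[u64]) -> u64 {
              v.iter().sum() // no per-element bounds check to remove in the first place
          }

          fn main() {
              let v: Vec<u64> = (0..1_000).collect();
              assert_eq!(sum_indexed(&v), sum_iter(&v));
          }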

        • bluGill 1 hour ago
          In the real world the difference is rarely significant, assuming great programmers implement great algorithms. However, those two assumptions are rarely true.
      • sevensor 2 hours ago
        I read the post to see how you would answer, not because I was unclear about what the answer would be, because the only possible answer here is “sometimes.” I especially like the point that Rust can be faster because it enables you to write different things. As I never tire of getting downvoted for saying, I’ve improved the speed of a program by replacing C with Python, because nobody could figure out how to write the right thing in C. If even Python can do this, it must apply to just about every pair of languages.
    • anonnon 2 hours ago
      The article felt fairly dispassionate and even-handed to me, and I say this as someone who dislikes Klabnik very much and also dislikes the Rust community (especially its insidious, forced MIT rewrites of popular GPL software, with which they also break backwards compatibility). It is worth mentioning that there are certain things about Rust that conceivably could make it faster, e.g., const by default (theoretically facilitating certain optimizations), but in practice, thus far, do not.
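
      (A tiny sketch of what "const by default" means here:)

        fn main() {
            let x = 5;      // bindings are immutable unless marked otherwise
            // x += 1;      // rejected at compile time
            let mut y = 5;  // mutability is opt-in and visible at the binding site
            y += 1;
            println!("{x} {y}");
        }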
  • qsera 2 hours ago
    [flagged]
  • hobofan 2 hours ago
    Off-topic: Is it just me, or have there been a disproportionally high number of ~mid 2025 posts that have been reposted the last few days?
  • voidUpdate 2 hours ago
    Depends what you're doing with it... You can make any language you want slower than another language by using it badly
    • bell-cot 1 hour ago
      True, but also a tautology.

      Instead, I'd say that Rust & C are close enough, speed-wise, that (1) which one is faster will depend on small details of the particular use case, or (2) the speed difference will matter less than other language considerations.

    • philipallstar 2 hours ago
      This is like saying that no car is faster than any other because it depends on what gear you drive it in.
      • voidUpdate 1 hour ago
        That's true as well. Language speed depends on how you use it