rust

soulsource, in Why Rust is the most admired language among developers?
@soulsource@discuss.tchncs.de avatar

I can only speak from my own experience, which is mostly C++, C#, C and Rust, but I also know a bit of Haskell, Java, Fortran, PHP, Visual Basic, and, to my deepest regret, also JavaScript.

For additional context: I have been working in game development for the last 7 years, my main language is C++ for Unreal, but I’ve also worked on some Unity projects with C# as the main language. Before I switched to game dev I worked in material science, and mostly used C. I use Rust for my spare-time projects, and the game company I work at is planning to introduce it into our Unreal projects at some point later this year.

Of all the languages I mentioned above, (Safe) Rust and Haskell are the only ones that have not yet made me scream at my PC, or hit my head against the desk.

So, some of the reasons why I personally love Rust:

  • Rust is extremely simple compared to the other languages I mentioned above. If you read the official introduction you know all you need to write Safe Rust code.
  • Rust’s syntax is elegant. It’s not as elegant as Haskell, but it’s a lot more elegant than any C-based language.
  • Rust is (mostly) type safe. There are (nearly) no implicit conversions.
  • Rust is memory-safe, without the runtime overhead that garbage collected languages incur.
    • This is a bit of a neutral point though. The Rust compiler will complain if you make mistakes in memory management. Unlike in managed languages, you still need to do the memory management by hand, and find a working solution for it.
  • The memory management model of Rust (“borrow checker”) makes data dependencies explicit. This automatically leads to better architecture that reflects dependencies, because if the architecture doesn’t match them, development will become an uphill battle against the borrow checker.
  • Due to the borrow checker, you can use references extensively, and rely on the referenced object being valid and up-to-date (because it cannot be mutated or go out of scope as long as you hold the reference).
  • Traits are an amazing way to abstract over types. Either at zero-cost (static dispatch), or, in the rare cases where it’s needed, using virtual function tables.
  • Rust aims to have no undefined behaviour. If it compiles the behaviour of the code is well defined.
    • This, together with the borrow checker, ensures that there are (nearly) no “weird bugs”. Where in C++ one quite regularly hits issues that at first glimpse seem impossible, and only can be explained after several days of research on cppreference (“oh, so the C++ standard says that if this piece of code gets compiled on a full moon on a computer with a blue power LED, it’s undefined behaviour”), that almost never happens in Rust.
  • Macros in Rust are amazing. There are macros-by-example that work by pattern-matching, but there are also procedural macros, which are Rust functions that take Rust code as input, and generate Rust code as output. This gives you amazing power, and one of the most impressive examples is the Serde serialization framework, which allows you to add serialization to your data types simply by adding an attribute (a short sketch follows after this list).
  • Tooling for Rust is pretty good. The Rust compiler is well known for its helpful error messages. The rust-analyzer plugin for Visual Studio Code is great too. (It also works with vim, Qt Creator and others, but Visual Studio Code works best imho.)
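To make the macro point concrete, here is a minimal sketch of the Serde attribute in action. It assumes the serde crate (with the derive feature) and serde_json; the Config type is just a made-up example:

```rust
use serde::{Deserialize, Serialize};

// One attribute is all it takes to make a type (de)serializable.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Config {
    name: String,
    retries: u32,
}

fn main() -> Result<(), serde_json::Error> {
    let cfg = Config { name: "demo".into(), retries: 3 };

    // The derive macro generated all of the (de)serialization code for us.
    let json = serde_json::to_string(&cfg)?; // {"name":"demo","retries":3}
    let back: Config = serde_json::from_str(&json)?;

    assert_eq!(back, cfg);
    Ok(())
}
```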

The points mentioned above mostly apply to Safe Rust though. Unsafe Rust is a different story.

This brings us to the downsides. Rust isn’t perfect. Far from it, actually. Here are some of the things that aren’t great about Rust.

  • No Higher Kinded Types. This is my main issue with Rust. Even C++ has them (as usual for C++ in a horrible un-ergonomic and utterly confusing way). If Rust had Higher Kinded Types, the language could have been simpler still. For instance, there would have been no need for the async keyword in the language itself.
  • Unsafe Rust is hard. In my opinion even harder than C++, because of Rust’s aliasing rules. Unlike C++, Rust doesn’t allow mutable memory aliasing. That’s because mutable aliasing can never happen in Safe Rust, and not supporting it improves performance. This means that when writing Unsafe Rust, one has to be careful about aliasing.
    • Luckily one only rarely needs Unsafe Rust, usually just to call functions from other languages. Still, it’s hard, and I’d generally suggest using an automated code generator like cxx.rs for interfacing with other languages.
  • Interior Mutability. I understand why it exists, but it breaks a lot of the guarantees that make Rust a great language. So, my conclusion is that one should avoid it as much as possible. (A short illustration of the trade-off follows below.)
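To illustrate the trade-off with the most common interior-mutability type (a minimal sketch, not taken from the post above): RefCell still enforces the borrow rules, but only at run time, so a violation becomes a panic instead of a compile error.

```rust
use std::cell::RefCell;

fn main() {
    let value = RefCell::new(vec![1, 2, 3]);

    // The borrow rules still exist, but they are checked while the program runs.
    let reader = value.borrow();
    // value.borrow_mut().push(4); // compiles fine, but would panic here
    //                             // at run time with a BorrowMutError
    println!("{:?}", *reader);
    drop(reader);

    // Once the shared borrow is gone, mutating through a &RefCell is allowed.
    value.borrow_mut().push(4);
    assert_eq!(*value.borrow(), vec![1, 2, 3, 4]);
}
```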

However, the upsides clearly outweigh the downsides imho.

tl;dr If a (Safe) Rust program compiles, chances are pretty high that it also works. This makes programming with it quite enjoyable.

jadero,

This, together with the borrow checker, ensures that there are (nearly) no “weird bugs”. Where in C++ one quite regularly hits issues that at first glimpse seem impossible, and only can be explained after several days of research on cppreference (“oh, so the C++ standard says that if this piece of code gets compiled on a full moon on a computer with a blue power LED, it’s undefined behaviour”), that almost never happens in Rust.

Ah yes, the Chaos Theory principle of programming.

You’ve settled my mind on which language to tackle next. There are a couple projects that have been calling my name, one in Go and one in Rust. Strictly speaking, I might be able to contribute to their documentation and tutorials without ever looking at the code (nobody in their right mind would ever accept code from me anyway), but I like to have some idea of what goes on under the hood.

Rust it is.

starman,
@starman@programming.dev avatar

to my deepest regret, also JavaScript

I can relate. And thanks for this high-effort comment.

Lojcs,

For instance, there would have been no need for the async keyword in the language itself.

Can you explain how?

soulsource,
@soulsource@discuss.tchncs.de avatar

First things first: I haven’t fully thought this through, as I haven’t attempted to implement it (yet). It was just an idea I had while working on higher-free-macro.

It wouldn’t yield the same syntax of course, but you could express the flow of the async computation in terms of a Free-Monad-based embedded domain-specific language. The interpreter for the eDSL in question would then do the equivalent of the async runtimes we have currently.

I could imagine that the syntax could be pretty nice when using the do-notation from higher.

However, since I haven’t tried implementing it, I can’t say for certain that there aren’t any hard walls one could hit, especially related to Rust’s ownership model, or more complex dependency trees.

bluGill,
bluGill avatar

I've been writing C++ for years and I have yet to be burned by undefined behavior. And because it exists, the compiler doesn't have to insert slow if-checks in the places where my code could do different things on different systems.

I run undefined behavior sanitizer on everything. The only time it has ever complained was a case where my platform does define the behavior and I was intentionally relying on that.

words_number,

The existence of undefined behaviour does not at all help performance. Those unnecessary if-checks are mostly a myth, and even when they are introduced (e.g. bounds checks when indexing arrays), they are usually outweighed by the advantages of disallowing aliasing: references can be used much more “carelessly” without runtime checks, because those checks happen at compile time by default, and compilers can generally optimize code better because they know more about the aliasing (or lack thereof) of specific data. In larger, modern C++ projects a lot of smart pointers are used to enforce similar aliasing rules, but there they are enforced at runtime. Generally, the lack of undefined behaviour enables both programmers and compilers to design, architect and optimize code better. There are enough examples out there. Cloudflare’s in-house proxy solution comes to mind, which is written in Rust and easily beats nginx (!!), serving billions of requests per day.

bluGill,
bluGill avatar

https://lists.isocpp.org/std-proposals/2023/08/7587.php gives one example where it does. Rust defines what happens for a case that is clearly nonsense, and thus Rust needs to check for that case (on processors where the CPU does something different), even though, if you ever hit it, you have a bug in your code.

words_number,

And, even more importantly: depending on the use case, that work is not wasted! “You have a bug in your code” is very possible (less likely in Rust due to its design, but still). If that bug triggers UB, chances are high that you have an exploitable security problem there. If it instead triggers a panic due to Rust’s checks, the app stops in a clean way, with a decent message and without a security vulnerability.

words_number,

I don’t doubt that you can easily craft micro-benchmarks out of very specific cases. My point was that in real-world applications, the advantages easily outweigh the disadvantages! And in a very tight loop of performance-critical code where this might not be the case, you can still use unsafe and very carefully disable checks where you control the invariants yourself.
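For example (a sketch of what “carefully disabling checks” can look like; not a claim about any particular benchmark), slice indexing is bounds-checked by default, but in a hot loop where the invariant is obvious you can opt out explicitly:

```rust
/// Sum of a slice with the bounds check skipped inside the hot loop.
fn sum_unchecked(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        // SAFETY: `i < data.len()` is guaranteed by the loop bound, so the
        // unchecked access can never read out of bounds.
        total += unsafe { *data.get_unchecked(i) };
    }
    total
}

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    assert_eq!(sum_unchecked(&data), 500_500);
}
```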

anlumo,

The only problem with that is that LLVM, which the Rust compiler uses, is primarily designed for C++. Since that language always has to assume aliasing, the optimizer isn’t particularly good at exploiting its absence. I think it’s fixed now, but for the first few years, rustc didn’t even pass the noalias attribute to the optimizer, because the support for it was completely broken.

words_number,

Yes, that optimization is finally enabled now. But even without it, programmers are less defensive when writing Rust because of the freedom from UB, so they write more optimal code and use better architectures before the compiler even comes into play. It doesn’t show in micro-benchmarks, but in more complex software that has been written in Rust from the start it’s pretty obvious.

anlumo,

I think the excessive use of iterators is the reason for the more performant code. They allow for very good optimizations due to their compile-time predictability.
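Roughly what that looks like in practice (an illustrative sketch; how much either version is optimized of course depends on the target and compiler flags):

```rust
// Two equivalent computations: an explicit index loop and an iterator chain.
// The iterator version expresses the whole traversal up front, with no
// arbitrary indexing, which tends to make bounds checks easy to elide and
// the loop easy for the compiler to reason about.
fn sum_of_squares_loop(data: &[i64]) -> i64 {
    let mut acc = 0;
    for i in 0..data.len() {
        acc += data[i] * data[i];
    }
    acc
}

fn sum_of_squares_iter(data: &[i64]) -> i64 {
    data.iter().map(|x| x * x).sum()
}

fn main() {
    let data: Vec<i64> = (1..=10).collect();
    assert_eq!(sum_of_squares_loop(&data), sum_of_squares_iter(&data));
}
```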

teolan,
@teolan@lemmy.world avatar

The only time it has ever complained was a case where my platform does define the behavior and I was intentionally relying on that.

If by platform you mean target CPU you should be aware that it’s still undefined behaviour and that it could break optimizations, unless your compiler also makes a commitment to define that behavior that is stronger than what the standard requires.

bluGill,
bluGill avatar

I broke the one-definition rule by having a symbol in two different .so files. The optimizer can't optimize around this, and on Linux the order of loading decides who wins. On Windows there are different rules, but I forget which.

Of course, if the optimizer could make an optimization I would be in trouble, but my build system ensures that no optimizer ever sees both definitions.

Walnut356,
@Walnut356@programming.dev avatar

For downsides, I’d like to add that the lack of function overloading and default parameters can be really obnoxious and lead to [stupid ugly garbage].

A funny one I found in the standard library is in time::Duration. Duration::as_nanos() returns a u128, but Duration::from_nanos() only accepts a u64. That means you need to explicitly downcast and possibly lose data to make a Duration after any transformations you did.

They can’t change from_nanos() to accept u128 instead, because that would be a breaking change, since casting upwards has to be explicit too (for some reason). The only solution then is to add a from_nanos_u128(), which is both ugly and leaves the 64-bit variant hanging there like a vestigial limb.
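For reference, this is the round trip being described (the std API is as stated above; the try_from variant is just one way to make the possible truncation explicit):

```rust
use std::time::Duration;

fn main() {
    let d = Duration::from_secs(3);

    // as_nanos() hands back a u128...
    let doubled: u128 = d.as_nanos() * 2;

    // ...but from_nanos() only takes a u64, so the round trip needs an
    // explicit (and potentially lossy) downcast.
    let d2 = Duration::from_nanos(doubled as u64);
    assert_eq!(d2, Duration::from_secs(6));

    // A variant that at least makes the possible data loss visible:
    let d3 = u64::try_from(doubled)
        .map(Duration::from_nanos)
        .expect("nanoseconds overflowed u64");
    assert_eq!(d3, d2);
}
```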

crushyerbones,

Where do you work if I may ask? Every game company I worked at was pretty much set in their ways and I’d love to have an excuse to use rust professionally!

soulsource,
@soulsource@discuss.tchncs.de avatar

Before you get overly excited, we plan to introduce it later this year. As in game-dev “plan”, as in “it might be cut or delayed” 😜. What is holding us back is that we need time to get a Rust toolchain set up for all our target platforms, which have certain requirements that the toolchain needs to meet, and time is always a tight resource in game dev.

That said: our technical director is very adamant about pushing us towards a more functional programming style (his website explains why). If we could, we would go pure functional right now, but it’s really hard to find people who have experience with fully functional languages, and therefore we want to have the next-best thing, which is Rust. (Or F# for Unity projects. We don’t have any Unity projects right now, but we have already used F# in Rescue HQ, for instance.)

And finally, to answer your questions: I work at stillalive studios. Here is a list of our open positions: stillalive.games/careers/ Also, I can say from personal experience that the “speculative application” paragraph is definitely true.

tatterdemalion,
@tatterdemalion@programming.dev avatar

Rust and Haskell are the only ones that have not yet made me scream at my PC

As someone who likes Rust and uses it every day, how have you never screamed at your PC as a direct result of the borrow checker or trait solver? Have you never encountered errors such as higher-ranked lifetime error: failed to prove $FOO: Send, which is sometimes actually just a bug in the compiler? Or the classic the trait bound $FOO: $BAR is not satisfied. axum even has a #[debug_handler] macro just to improve this error. I have spent literal days of my life fixing these kinds of errors, when the compiler not only doesn’t provide a solution but fails to pinpoint the cause of the problem.

I can only hope diagnostics continue to improve, because I know they matter to the Rust team.

soulsource,
@soulsource@discuss.tchncs.de avatar

I have seen some errors along those lines (but not exactly those) while working on the Free Monads proof of concept. Especially while trying to come up with a solution that doesn’t require macros (which I didn’t manage in Stable Rust, exactly due to such issues).

I have yet to see them in actual production code though, but maybe I was just lucky up to now?

anlumo, (edited )

Unlike C++, Rust doesn’t allow mutable memory aliasing. That’s because mutable aliasing can never happen in Safe Rust, and not supporting it improves performance. This means that when writing Unsafe Rust, one has to be careful about aliasing.

Note though that it’s perfectly fine to have multiple mutable raw pointers pointing to the same data, as long as there’s no ownership held by any Rust code. The problem only happens if you try to convert them into references.

soulsource,
@soulsource@discuss.tchncs.de avatar

It seems I misunderstood something important here. I’d take that as proof that Unsafe Rust is rarely needed. 😜 A quick test on the Playground shows that indeed, using raw pointers does not yield the wrong result, while using references does: play.rust-lang.org/?version=stable&mode=relea…

Conclusion: Unsafe Rust is not as difficult as I thought.

Alonely0,
@Alonely0@mastodon.social avatar

@soulsource @anlumo dude your whole code is UB. A reference & means that the data behind it never changes while any reference exists, allowing multiple pointers to point at it at the same time (aliasing); whereas a mutable reference &mut means that the data behind may only be read or written by that pointer, i.e. multiple pointers (aliasing) can't exist. The compiler uses this to optimize code and remove stuff that you promise never happens. Always use miri, and go read the nomicon.

soulsource,
@soulsource@discuss.tchncs.de avatar

That was how I thought it works until yesterday. And Miri seems to confirm what I thought.

But then there was this comment, that suggested otherwise: discuss.tchncs.de/comment/2544085

Thanks for correcting my worldview, because after that playground example behaved as if aliasing were allowed, my worldview was kinda shattered. Oh, and I had completely forgotten that the Playground has Miri built in.

anlumo,

I left something important out from my explanation. Your example still holds ownership of the data, so that’s where the rules are violated with those raw pointers. You have to use Box::into_raw or something similar to disassociate the data from the Rust compiler. Then you can alias it using raw pointers.

TehPers,

If you run Miri on your code (Tools -> Miri in the Playground), it actually seems to complain about UB. I’m not experienced enough with unsafe rust to translate that error message to something meaningful though.

Edit: Wait, that’s the while_here_it_isnt method. I’m clearly tired…

soulsource,
@soulsource@discuss.tchncs.de avatar

Until yesterday I wouldn’t have expected either to be sane. But then I got the reply above, that aliasing pointers is fine. The playground link is how I interpreted that statement.

So, if my previous intuition was correct, how is discuss.tchncs.de/comment/2544085 to be interpreted?

TehPers, (edited )

You (and they) are right that aliasing pointers is fine. I was running Miri on your playground link, and it gave the expected results. I was just too tired to realize that it was saying your failure case (where you did multiple mutable aliasing with borrows) caused UB and that your success case (where you did multiple mutable aliasing with pointers) did not cause UB.

Generally speaking, the rules around aliasing only apply to borrows in Rust, from my understanding. Any code that creates two &mut borrows of the same value is immediate UB. Any code that could possibly cause that to happen using safe code is unsound. Since your method operates only on the raw pointers, no aliasing rules have been broken, however the compiler also can’t optimize around your code the same way it could had you used regular borrows (assuming it’s possible). At a lower level, this is reflected by the compiler telling LLVM that &mut T values (usually) are not aliased, and LLVM applies optimizations around that. (Note that UnsafeCell (doc.rust-lang.org/std/cell/struct.UnsafeCell.html) is a bit of a weird case, but is fundamental to how the other cell types work.)

This is actually why shared pointers like Rc and Arc only give you shared borrows (&) of the values contained in them, and why you’re required to implement some kind of interior mutability if you want to mutate the shared values. The shared pointer cannot guarantee that two borrows of the same value are not active at the same time, but does allow for shared ownership of it. The Cell/RefCell/Mutex/etc. types verify that there is only one active &mut T (unique borrow) of the inner value at a time (or, in Cell’s case, even allow you to mutate without ever receiving a &mut T).

Note that while &T and &mut T are often referred to as “immutable” and “mutable” references, it’s probably more accurate to refer to them as “shared” and “unique” references. Mutability is not actually tied to whether you have a &T or a &mut T. This is trivially shown by looking at the Atomic* types, which only require a &self for their store operation.
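The last point is easy to demonstrate (a small sketch): with an atomic, mutation happens through shared references, because “shared vs. unique” is the real distinction, not “immutable vs. mutable”.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let counter = AtomicU32::new(0);

    // Two *shared* references to the same atomic...
    let a = &counter;
    let b = &counter;

    // ...and yet both can mutate it: store/fetch_add only need &self.
    a.store(5, Ordering::Relaxed);
    b.fetch_add(1, Ordering::Relaxed);

    assert_eq!(counter.load(Ordering::Relaxed), 6);
}
```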

dr_robot, in What are you working on this week? (Mar. 2, 2024)
dr_robot avatar

I'm working on a music collection manager with a TUI for myself. I prefer to buy and own music instead of just streaming, and I have a self-hosted server with ZFS and backups where I keep the music and from which I can stream or download to my devices. There are websites which help you keep track of what you own and maintain wishlists, but they don't really satisfy my needs, so I decided to create my own. Its main feature is to give an easier overview of which albums I own and don't own for the artists I'm interested in, and to maintain a wishlist based on this for my next purchases.

I'm doing it in Rust, because it's a hobby project and I want to get better at Rust. However, it has paid off in other ways. The type system has allowed me to create a UI that is very safe to add features to without worrying about crashes. Sometimes I actually have to think about why it didn't crash, only to find that Rust forced me to correctly handle an optional outcome before even getting to an undefined situation.

crispy_kilt, in I found a tool that allows compiling Rust to JVM bytecode (make JAR files from Rust, if you dare?)

But why

scarilog,

Why not?

sik0fewl, in I found a tool that allows compiling Rust to JVM bytecode (make JAR files from Rust, if you dare?)

Finally, the speed of the Rust compiler with the memory efficiency of the JVM!

crispy_kilt, in Examples are not Documentation

Uhm. Yes they are.

blazebra, in How hard can generating 1024-bit primes really be?

Nice article, I enjoyed it. Why was a float sqrt used? Integer sqrt is way faster and easily supports integers of any length.

farcaster, (edited )

The builtin u64::isqrt seems to be available in nightly only, and additionally I guess the author didn’t want to use any external crates as part of their self-imposed challenge. Though I think there may be an off-by-one result with f64::sqrt, I don’t think this functionally breaks their u64 code, because they loop to root_n + 1.

doc.rust-lang.org/std/primitive.u64.html#method.i…

blazebra,

The algorithm is so plain and simple that it doesn’t require nightly, or Rust specifically, to implement.

farcaster, (edited )

Well, yeah, but you asked why they didn’t use integer sqrt. It’s something many programming languages just don’t have. Or if they do, it’s internally implemented as a sqrt(f64) anyway, like C++ does.

Most CPUs AFAIK don’t have integer sqrt instructions so you either do it manually in some kind of loop, or you use floating point…

blazebra,

Integer sqrt is usually not a library function, but it’s very easy to implement: just a few lines of code. The algorithm is well described on Wikipedia. And yes, it doesn’t use the FPU at all, and it’s quite fast even on an i8086.
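For instance, a Newton-style integer square root really is only a few lines in Rust (a quick sketch; the algorithm and its termination argument are the standard ones from the Wikipedia article on integer square roots):

```rust
/// Integer square root: the largest r with r * r <= n.
/// Newton's method on integers; no floating point involved.
fn isqrt(n: u64) -> u64 {
    if n < 2 {
        return n;
    }
    // Initial guess: a power of two that is at least sqrt(n).
    let bits = 64 - n.leading_zeros();
    let mut x = 1u64 << ((bits + 1) / 2);
    loop {
        let y = (x + n / x) / 2;
        if y >= x {
            return x;
        }
        x = y;
    }
}

fn main() {
    assert_eq!(isqrt(0), 0);
    assert_eq!(isqrt(15), 3);
    assert_eq!(isqrt(16), 4);
    assert_eq!(isqrt(u64::MAX), u32::MAX as u64);
}
```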

farcaster,

I doubt doing it in software like that outperforms sqrtss/sqrtsd. Modern CPUs can do the conversions and the floating point sqrt in approximately 20-30 cycles total. That’s comparable to one integer division. But I wouldn’t mind being proven wrong.

blazebra,

Integer sqrt can be used for integers of any length, not only for integers that fit into an f64.

zinderic, in Rust Mastery book bundle from Humble Bundle

I bought the bundle looking for some wasm, async, and web Rust resources. I’m not impressed. The quality of the books is low and most of it is a waste of time that I would have spent more effectively on just the documentation. It’s also true that there are some C++ and C# books in there, which baffles me. I actually started reading one of the wasm books, only to realize it was a C++ one a few minutes later 🤣

Once again I see that I should stay away from Packt.

I think I’ll go grab Rust for Rustaceans now instead of all those in the bundle.

BB_C, in Rusty Playdate - Toolset and API

You should have kept this in the microblogtard platform of your choice.

superbirra, in Rusty Playdate - Toolset and API

All going great, moving to stabilization step by step, but I’m tired of looking at the dull 70⭐️. Seriously, could you please throw me into the trends of the github, please! ❤️‍🔥

How can you at almost 40 years old not be embarrassed by such a post?

fzz, (edited )
@fzz@programming.dev avatar

I just don’t care, I’m only 40 years old, it’s a kid’s age.

blazebra,

Could you please explain why he should be embarrassed by such a post? What would you improve?

Nereuxofficial, (edited ) in What are you working on this week? (May. 05, 2024)

A general-purpose memory allocator. This is still very much a work in progress, but I think there are some good opportunities for optimization in a memory allocator written for Rust.

For example, Rust gives you the size of the memory region to free, which means the allocator does not have to track that.
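That size comes in through the allocator interface itself. A do-nothing wrapper around the system allocator shows the shape (a sketch using the std::alloc::GlobalAlloc trait, not the allocator being described above): dealloc receives the full Layout (size and alignment) of the block being freed.

```rust
use std::alloc::{GlobalAlloc, Layout, System};

/// A pass-through wrapper around the system allocator, just to show the
/// interface: `dealloc` is handed the `Layout` of the block being freed,
/// so the allocator itself never has to record per-allocation sizes.
struct PassthroughAlloc;

unsafe impl GlobalAlloc for PassthroughAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // The caller tells us how big (and how aligned) the allocation was.
        let _freed_bytes = layout.size();
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static ALLOC: PassthroughAlloc = PassthroughAlloc;

fn main() {
    let v: Vec<u8> = vec![1, 2, 3]; // allocates through PassthroughAlloc
    drop(v);                        // dealloc receives the matching Layout
}
```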

nebeker, in What are you working on this week? (May. 05, 2024)

Just learning. I threw together a little CRUD API in Rocket the other day.

Now I’m playing around with Diesel. I don’t love the intermediate New types, coming from EF Core. Is that because Rust doesn’t really have constructors?

sugar_in_your_tea,

Can you give an example? Pretty much everything in Diesel is abstracted away through trait macros.

nebeker, (edited )

The insert in their Getting Started guide.


let new_post = NewPost { title, body };

diesel::insert_into(posts::table)
    .values(&new_post)
    .returning(Post::as_returning())
    .get_result(conn)
    .expect("Error saving new post")

Of course, the other possibility is that this guide is just very low on abstractions.

sugar_in_your_tea,

Ah, I see. So you’re expecting to have one object for creation, updates, queries, etc.

I work with something like that at work (SQLAlchemy in Python), and I honestly prefer the Diesel design. I build an object for exactly what I need, so I’ll have a handful of related types used for different purposes. In Python, we have a lot of “contains_eager” calls to ensure data isn’t lazy loaded, and it really clutters up the code. With Diesel, that’s not necessary because I just don’t include data that I don’t need for that operation. Then again, I’m not generally a fan of ORMs and prefer to write SQL, so that’s where I’m coming from.

nebeker,

One of my main concerns with this is the potential for making a lot of separate calls to the DB for a complex data structure. Then again, there are trade-offs to any architecture.

sugar_in_your_tea, (edited )

Isn’t the reverse true? If you make separate models for each query, the ORM knows exactly what data you need, so it can fetch it all at once. If you use generic models, the ORM needs to guess, and many revert to lazy loading if they’re not sure (i.e. lots of queries).

That’s at least my experience with SQLAlchemy; we put a lot of effort into reducing those extra calls because we’re using complex, generalized structures.

fnmain, in What are you working on this week? (May. 05, 2024)

Working on a static site generator for a publication coming out of my high school

Cwilliams, in What are you working on this week? (May. 05, 2024)

I just started writing a programming language! I’m using pest for the grammar/lexing, which makes it super easy. I built my AST using enums and structs, which makes me appreciate Rust even more. I’m also coming to love the way Rust handles errors. For example, when there’s an error converting from pest -> AST, it feeds all of the error info into my error type, which is so much easier to handle than making it panic out of the function.
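Roughly the pattern being described, with made-up error types standing in for the pest error and the AST error (a sketch, not the commenter’s actual code): a From impl is all the ? operator needs to funnel one error type into the other.

```rust
// Hypothetical stand-ins for the parser error and the AST error type.
#[derive(Debug)]
struct ParseError(String);

#[derive(Debug)]
enum AstError {
    Parse(ParseError),
    UnknownOperator(String),
}

// This impl is what lets `?` convert a ParseError into an AstError.
impl From<ParseError> for AstError {
    fn from(e: ParseError) -> Self {
        AstError::Parse(e)
    }
}

fn tokenize(input: &str) -> Result<Vec<&str>, ParseError> {
    if input.trim().is_empty() {
        return Err(ParseError("empty input".into()));
    }
    Ok(input.split_whitespace().collect())
}

fn build_ast(input: &str) -> Result<Vec<String>, AstError> {
    // On failure, `?` feeds the ParseError into AstError via the From impl,
    // instead of panicking out of the function.
    let tokens = tokenize(input)?;
    tokens
        .iter()
        .map(|t| {
            if t.chars().all(|c| c.is_alphanumeric() || "+-*/".contains(c)) {
                Ok(t.to_string())
            } else {
                Err(AstError::UnknownOperator(t.to_string()))
            }
        })
        .collect()
}

fn main() {
    println!("{:?}", build_ast("1 + 2"));
    println!("{:?}", build_ast("   "));
}
```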

solrize, in How hard can generating 1024-bit primes really be?

This is a pretty lame article. The idea is just to use a bignum library, or a language with native bignums. While a few optimizations help, you basically just generate random 1024-bit numbers until you get something that passes a pseudoprime test, and call it a day. The rest of the article converts the above into a beginner Rust exercise, but I think it’s preferable not to mix up the two.

From the prime number theorem, around 1/700th of numbers at that size are prime. By filtering out numbers with small divisors you may end up doing 100 or so pseudoprime tests, let’s say Fermat tests (3**n mod n == 3). A reasonable library on today’s machines can do one of those tests in around 1ms, so you are good.
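That test is tiny once you have modular exponentiation. A sketch for 64-bit candidates (the real thing needs bignums, but the shape is the same; the function names are made up):

```rust
/// base^exp mod m by repeated squaring, using u128 so products can't overflow.
fn pow_mod(base: u64, mut exp: u64, m: u64) -> u64 {
    let m = m as u128;
    let mut b = base as u128 % m;
    let mut result: u128 = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * b % m;
        }
        b = b * b % m;
        exp >>= 1;
    }
    result as u64
}

/// Fermat test to base 3, as described above: every prime p satisfies
/// 3^p ≡ 3 (mod p), while most composites fail, so survivors are "probably prime".
fn fermat_probable_prime(n: u64) -> bool {
    if n < 4 {
        return n == 2 || n == 3;
    }
    pow_mod(3, n, n) == 3
}

fn main() {
    assert!(fermat_probable_prime(97));            // prime
    assert!(fermat_probable_prime(1_000_000_007)); // prime
    assert!(!fermat_probable_prime(15));           // composite: 3^15 ≡ 12 (mod 15)
}
```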

RSA is deprecated in favor of elliptic curve cryptography these days anyway.

farcaster,

The author pointed out they also could’ve just called openssl prime -generate -bits 1024 if they weren’t trying to learn anything. Rebuilding something from scratch and sharing the experience is valuable.

solrize,

There’s two things going on in the exercise: 1) some introductory Rust programming; 2) some introductory math and crypto.

Maybe it’s just me but I think it’s better to separate the two. If you’re going to do a prime number generation exercise, it will be easier in (e.g.) Python since the bignum arithmetic is built in, you don’t have all the memory management headache, etc. If you’re going to do a Rust exercise, imho it is better to focus on Rust stuff.

farcaster,

There isn’t even any memory management in their code. And arguably the most interesting part of the article is implementing a bignum type from scratch.

anlumo, in Examples are not Documentation

clap and bevy are big offenders there. It’s really hard to learn how to use them due to this.

tatterdemalion,
@tatterdemalion@programming.dev avatar

Are you kidding me? Clap has some of the best documentation of any crate.

anlumo,

I just checked again, and apparently they finally added some documentation since I last checked. The section about the macro stuff just used to say “look at the examples”.

tatterdemalion,
@tatterdemalion@programming.dev avatar

Ah that explains it.
