gregorni, to ProgrammingLanguages
@gregorni@fosstodon.org

Is there a compiled programming language with a Python-like syntax?

janriemer, to rust

It's alive!🎉

I've built a transpiler in Rust, compiled it to WebAssembly and integrated it into a web app! :awesome:

It's called selecuery.✨

It can transpile X++ select statements into query expressions. If you think "X++" is a typo and you don't have any idea of what I'm talking about, don't worry.😄

Have a look at the video below.

This project is dear to my heart! ❤️ I started it in 2019 to learn Rust.

I think I've been transpiled during this project as well.🤪

A video showing a web app with two code editors side-by-side. On the left, source code is entered, which looks like an SQL dialect. As the code is entered on the left, the code editor on the right updates in real-time. The right editor shows the SQL-like statement in a very different form, namely as a sequence of method calls on a query object. So it has just transpiled a declarative SQL-like statement into a procedural query expression. You can think of it a bit like C#'s LINQ: LINQ also has a declarative form and a procedural form.

janriemer, (edited ) to javascript

This is mad 🤯

oxc - The Oxidation #Compiler is creating a suite of high-#performance tools for #JavaScript / #TypeScript, written in #Rust - by Boshen:

https://github.com/web-infra-dev/oxc

Its linter is 50 - 100 times faster than #ESLint...

https://github.com/Boshen/bench-javascript-linter

...and its parser is even 2x faster than #swc

https://github.com/Boshen/bench-javascript-parser-written-in-rust

#JS tooling goes brrrrrrrrrrr! 🚀

#RustLang #WebDev #WebDevelopment

thelastpsion, to random
@thelastpsion@bitbang.social

Musing on #ctran.

I'm starting to wonder if there's any point in having the lexer and parser as two separate classes.

Other than testing, the lexer is only ever going to be called by the parser, and only once during the process.

It might be better to just have a lexer-parser class that grabs a file, tokenises it, then (if it's happy with the file it's tokenised) immediately turns it into a tree.

Is there a really good reason why they should be separate classes?

#compiler #objectpascal #oop
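A minimal sketch (in Rust, with invented names) of why keeping the two separate can still pay off even if the lexer has only one caller: if the parser only depends on a stream of tokens, tests can feed it hand-built token sequences without ever touching source text.

```rust
// Hypothetical tokens for a tiny language.
#[derive(Debug, PartialEq, Clone)]
enum Token {
    Ident(String),
    Number(i64),
}

// The lexer owns the character-level concerns (whitespace, digits)...
struct Lexer<'a> {
    chars: std::iter::Peekable<std::str::Chars<'a>>,
}

impl<'a> Lexer<'a> {
    fn new(src: &'a str) -> Self {
        Self { chars: src.chars().peekable() }
    }
}

impl<'a> Iterator for Lexer<'a> {
    type Item = Token;
    fn next(&mut self) -> Option<Token> {
        while matches!(self.chars.peek(), Some(c) if c.is_whitespace()) {
            self.chars.next();
        }
        let c = *self.chars.peek()?;
        if c.is_ascii_digit() {
            let mut n = 0i64;
            while matches!(self.chars.peek(), Some(c) if c.is_ascii_digit()) {
                n = n * 10 + self.chars.next().unwrap().to_digit(10).unwrap() as i64;
            }
            Some(Token::Number(n))
        } else {
            let mut s = String::new();
            while matches!(self.chars.peek(), Some(c) if !c.is_whitespace()) {
                s.push(self.chars.next().unwrap());
            }
            Some(Token::Ident(s))
        }
    }
}

// ...while the "parser" side only sees tokens, from any source.
fn count_numbers(tokens: impl Iterator<Item = Token>) -> usize {
    tokens.filter(|t| matches!(t, Token::Number(_))).count()
}

fn main() {
    // Production: parser driven by the real lexer.
    assert_eq!(count_numbers(Lexer::new("a 1 b 2")), 2);
    // Test: parser driven by a hand-built token stream, no lexer involved.
    assert_eq!(count_numbers(vec![Token::Number(7)].into_iter()), 1);
    println!("ok");
}
```

Merging them into one lexer-parser class works too; the main thing you give up is this seam for testing each half in isolation.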

hywan, to rust
@hywan@fosstodon.org

Wasmer 3.3 - Running WebAssembly 2.5x faster with JavaScriptCore, https://wasmer.io/posts/wasmer-3.3-and-javascriptcore.

Wasmer 3.3 has a new backend which uses JavaScriptCore. Interesting approach. It’s the 4th backend after SinglePass, Cranelift and LLVM.

Wasmer is still a great project, but its CEO is a danger. Reminder: https://mnt.io/2021/10/04/i-leave-wasmer/. His toxic behaviour is still very present.

But the project is great. Thanks to the contributors and the brave employees working there!

ramin_hal9001, to random
@ramin_hal9001@emacs.ch

I have an unhealthy addiction to relatively obscure computers that I probably wouldn't actually use very much. Here is the latest one that the little voice in my head is telling me I need to buy so I can get my fix: the HiFive Pro P550 running the RISC-V ISA:

  • MicroATX form factor
  • 4-core 2.2 GHz
  • 16GB DDR5
  • Gigabit ethernet
  • PCIe expansion slot
  • NVMe

And it should be able to run Guix OS. The thing is, I don't really hack on operating systems or compilers very often, so I would only be using it as an ordinary end-user with the limited software available for it; something I can already do today, with far more software available, on any old x86_64 computer.

So logically, I don't actually need an awesome high-powered RISC-V development board for anything. But that doesn't stop me from seriously considering buying one.

#RISCV #compiler #operatingsystems #softwaredevelopment

thelastpsion, to random
@thelastpsion@bitbang.social

I'm trying to work out where the line is between a lexer (tokeniser) and a parser.

How far should a lexer go before it's doing stuff that a parser should do? Should the lexer have some intelligence about what it's expecting to see next, or what needs to be ignored (e.g. comments)? Or should the lexer just make tokens and leave the rest to the parser?

I'm not building a compiler as such, but the principles are basically the same for a preprocessor.

bread80, to random
@bread80@mstdn.social

Almost all the code generation is table driven. Inc and Dec are among the exceptions that require code. In this case it's a loop to generate the INC or DEC instructions.

The only thing left to do is to generate an add or subtract if the offset is too large. For now I'm stabbing at doing this for offsets greater than four. Optimising here is much more complex than it might seem. For example, you can INC any register, whereas ADD requires A.
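The INC-vs-ADD trade-off can be sketched in Z80 assembly (illustrative only; the crossover point is exactly what's being tuned here):

```asm
; Small offset: INC works on any 8-bit register, one byte per step.
inc b
inc b
inc b

; Larger offset: 8-bit ADD only targets A, so other registers
; have to round-trip through it, costing extra loads.
ld a, b
add a, 20
ld b, a
```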

janriemer, to rust

The feeling when you bang your head against the wall for 3 hours and then just try something, but don't really believe in it and suddenly all your unit tests pass! 🎉 :awesome:

This is the beauty of Rust - you can just try and guess until it works.😄 It's such a fun experience!

My editor showing the content of the previously mentioned test that is now passing. The most important data that is tested (needs to be transpiled) looks like an SQL statement (but it is not SQL, rather a weird dialect) with a join clause that consists of complicated parentheses and logical operators like "&&" and "||".

swetland, to golang
@swetland@chaos.social

TIL that Go doesn't have bytes.Equal([]byte,string) or strings.Equal(string,[]byte) because as of 2019 the compiler is smart enough to make string([]byte) into a cast rather than a copy (possibly with allocation) when used in these types of comparisons.

I wish there was some central documentation of non-obvious "magical" optimizations like this.

https://go-review.googlesource.com/c/gofrontend/+/170894
https://go-review.googlesource.com/c/go/+/173323

bytes, internal/bytealg: simplify Equal

The compiler has advanced enough that it is cheaper to convert to strings than to go through the assembly trampolines to call runtime.memequal. Simplify Equal accordingly, and cull dead code from bytealg. While we're here, simplify Equal's documentation. Fixes
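The pattern the optimization covers looks like this (a sketch; the precise conditions under which the copy is elided live in the compiler, not in any API guarantee):

```go
package main

import "fmt"

func main() {
	b := []byte("gopher")
	s := "gopher"
	// In a comparison like this, string(b) does not allocate a copy:
	// since the CLs linked above, the compiler recognizes the pattern
	// and compares the bytes in place.
	if string(b) == s {
		fmt.Println("equal")
	}
}
```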

fell, to cpp
@fell@ma.fellr.net

C++ compiler be like:

error LNK2001: unresolved external symbol "public: static class std::unordered_map<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,unsigned int,struct std::hash<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >,struct std::equal_to<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > >,class std::allocator<struct std::pair<class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const ,unsigned int> > > TextureStore::texture_cache" (?texture_cache@TextureStore@@2V?$unordered_map@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@IU?$hash@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@2@U?$equal_to@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@I@std@@@2@@std@@A)

And expect me to casually read that and go "Ah, I see".

bread80, to random
@bread80@mstdn.social

I apologise for not posting this earlier.

#Quiche #compiler is now alive! (At least Conway's variant of alive). The initial version was slow - about four seconds per generation. It was multiplying coordinates for each cell read and write.

The second variant uses offsets into each line buffer, and only redraws changed cells. It's now running at three to four generations per second.

#Pascal #Z80 #Amstrad

The next generation of the glider.

janriemer, to rust

Uh, ohhh... I think it's time for me to migrate away from #nom v4.2 😮

Yeah, I know, I've procrastinated on this a lot. This will probably be a lot of work and "slow me down" for a bit. 😪 On the upside, though: I can correct all my mistakes along the way (like having spans).

I'll probably migrate to #chumsky, but #winnow also looks really nice. 🙂

chumsky:
https://github.com/zesterer/chumsky

winnow:
https://github.com/winnow-rs/winnow

#Rust #RustLang #selecuery #Migration #OpenSource #Refactoring #Parser #Compiler

bread80, to random
@bread80@mstdn.social

Before Christmas I decided the compiler needed two big refactorings. The first is nearly done: the data tables for operands and primitives.

The OG version had grown confusing due to some poor initial decisions. It also put too much intelligence into the parser regarding the available types for each operator.

The new version allows the parser to scan the table to confirm whether an operator can handle the operand types. It can also 'expand' types to find a match...

fell, (edited ) to fediverse
@fell@ma.fellr.net

Okay, I need the swarm intelligence regarding C++:

Let's say I write a function like this, in C++17 or later:

inline int Calculate(int a, int b) {
    return a + b;
}

I put it in a file called calculate.h and include (and use) it at multiple other places in the code.

Let's assume the function is not inlined at call sites. Due to the inline keyword, the compiler will ensure that Calculate() exists only once. (See https://en.cppreference.com/w/cpp/language/inline)

Question: Will the compiler generate the instructions multiple times, or does it avoid compiling a function body that's already going to be compiled in a different translation unit?

In other words: Do lots of inline functions in header files slow down compilation?

janriemer, to random

While I'm rewriting my parser from nom to chumsky, I'm actually thinking about writing a nom-to-chumsky transpiler...

➰ 🙃

hywan, to rust
@hywan@fosstodon.org

Faster compilation with the parallel front-end in nightly, https://blog.rust-lang.org/2023/11/09/parallel-rustc.html.

The Rust compiler now has an intraprocess parallelism for its front-end, which allows faster compilation.

> our measurements on real-world code show that compile times can be reduced by up to 50%

That’s incredible work. Congrats to the contributors!
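Per the linked post, opting in is a one-liner on nightly (the flag is unstable, hence -Z):

```shell
RUSTFLAGS="-Z threads=8" cargo +nightly build
```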

bread80, to random
@bread80@mstdn.social

I have accidentally invented meta programming :)

Some compiler routines such as sizeof() need to be able to handle a type name as a parameter, for example sizeof(Integer).

I've added a type called TypeDef to handle this. When the parser hits an identifier which is a type name but not a typecast it returns a value of type TypeDef.
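The idea translates into a small sketch (Rust here, names and sizes invented for illustration): once a type name can itself be a value, sizeof() parses like any other call.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Ty { Integer, Byte, Word }

// A parsed value: either an ordinary integer, or a type used as a value
// (what the parser returns when it sees a type name that isn't a typecast).
#[derive(Debug, PartialEq)]
enum Value {
    Int(i64),
    TypeDef(Ty),
}

// sizeof() can now accept either kind of argument uniformly.
fn size_of(v: &Value) -> i64 {
    match v {
        // sizeof(Integer) - the parser handed us a TypeDef value.
        Value::TypeDef(Ty::Integer) => 2,
        Value::TypeDef(Ty::Word) => 2,
        Value::TypeDef(Ty::Byte) => 1,
        // sizeof(expr) - use the expression's type (sketch: all ints are 2).
        Value::Int(_) => 2,
    }
}

fn main() {
    assert_eq!(size_of(&Value::TypeDef(Ty::Byte)), 1);
    assert_eq!(size_of(&Value::Int(42)), 2);
    println!("ok");
}
```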

gregorni, to rust
@gregorni@fosstodon.org

The Rust compiler is, of course, written in Rust.

How do you compile it?

bread80, to random
@bread80@mstdn.social

This week I added the Peek() and Poke() intrinsics to the compiler. That means I can now write my first non-trivial program.

I spent this morning fixing a few bugs in the parser and code generator, and it's successfully generating the assembler file.

The assembler is choking on a couple of issues with identifiers, and the output code has a couple of bugs to do with parameter parsing and result processing.

Very close to working <g>

A section of the output assembler code.

bread80, to random
@bread80@mstdn.social

All of the operators are now passed over to the new data tables and primitive search. I'm moving on to intrinsics. These are small routines with function-like syntax that often generate inline code, such as peek, inp and sizeof.

Many of these have quirks, such as accepting multiple types, or a typedef. The quirk of Abs is unsigned values: it doesn't affect them. I could raise an error, but it's nicer to fake the data to not generate any code.

An extract from the primitives spreadsheet. The row for Integer type is normal. The next two rows are for Byte and Word input types. The 'Proc' column contains 'empty' which signals the code generator to not generate any code.

janriemer, to rust

How to speed up #Rust compile times by 16x 🚀

In his blog post "Speeding up Rust edit-build-run cycle", David Lattimore shows how you can speed up #RustLang compile times by 16x just by changing some default compiler config:

https://davidlattimore.github.io/working-on-rust-iteration-time.html

Excellent read! Highly recommend!

Make sure to read to the end, as there is a surprise awaiting you - it's pretty wild.

#Compiler #Performance #wild
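For context, the kind of config such posts tune looks roughly like this (a hedged sketch of common knobs, not necessarily the exact ones from the article):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
# Swap in a faster linker (mold assumed installed here).
rustflags = ["-C", "link-arg=-fuse-ld=mold"]

[profile.dev]
# Skip debug info generation when you don't need a debugger.
debug = 0
```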

etchedpixels, to 8bit
@etchedpixels@mastodon.social

Progress on the compiler. The Z8 now passes the test suite and the build coverage test. The test suite is pretty basic so there are probably plenty of bugs left. Code density is not great on the Z8 though. Also added register keywords for arguments to the compiler and split I/D to the linker.
The bytecode output for the also works with a bytecode engine in C, but the 1802 part is a long way off. Might have to stop putting off debugging the 65C816 now and carry on with

thelastpsion, to random
@thelastpsion@bitbang.social

Class-building time for #ctran.

The compiler tutorials I've read don't talk about how to deal with classes and inheritance. I assume that a metaclass has to be built for each class. But should I then store those metaclasses for later use, or should I regenerate them when needed? I assume the former.

Also, my parser doesn't currently check for duplicate classes or methods (inside classes). Should it be in the parser, or should it be part of the thing that builds the output?
