
galdor, to random
@galdor@emacs.ch avatar

In #Go, you cannot call the String method on a url.URL composite literal ("cannot call pointer method String on url.URL") because String has a pointer receiver and a literal is not addressable. String does not modify the object; it only uses a pointer receiver to avoid copying the struct on each call.

This is what you get when 1/ you design a language with pointers (why would you do that in 2009?) and 2/ you do not have "const".

Just bad design.

galdor,
@galdor@emacs.ch avatar

@holgerschurig

Modern Common Lisp implementations also have typed foreign types, this is not the wild west ;) Pointers are not "emulated", I honestly have no idea what you imagine. There are a lot of issues with Common Lisp, and I'm the first one to complain about them, but working with C libraries has always been very comfortable compared to what it is in Ruby, Erlang, or even Go (well, CGo).

I do not frown on pointers in general (I wrote way too much C professionally for that), but they do not make sense in Go: there is a GC and pointer arithmetic is not available! There is nothing useful you can do with them, and they cause so many headaches.

holgerschurig,

@galdor

Hmm, you're not easy to read --- for me.

Earlier you wrote that "pointers are memory addresses, which are simply integers". And more importantly, you wrote it after I mentioned Lisp and that maybe it has trouble calling system calls.

And now you write "Modern Common Lisp implementations also have typed foreign types"

So accept that what I wrote was based on unclear information transfer :-)

galdor, to emacs
@galdor@emacs.ch avatar

While the LSP protocol is useful for completion or access to symbol definitions, some of its features are less appealing. In Emacs, you can instruct Eglot to ignore any feature you dislike.

E.g. (setq eglot-ignored-server-capabilities '(:inlayHintProvider)) to remove annoying hints mixed with the code in c-mode with clangd.

tetrislife,

@galdor is that the one that puts formal parameter names next to the actuals in calls? It hasn't seemed off-putting on the occasions I have used it.

galdor, to random
@galdor@emacs.ch avatar

I had some fun: went through chapter 2 (Syntax) of the Common Lisp standard and bootstrapped a reader. I learned a few things:

  • Pure C99 is (still) easy to read and to write.
  • The standard is very well written: just do what is described and it works.
  • If you did not read the standard, there are a lot of things you do not know about Common Lisp.

Now I need minimal support for arbitrary-precision arithmetic, and I do not want any dependency.

galdor, to random
@galdor@emacs.ch avatar

I was playing again with my HTTP implementation and I've made my peace with its blocking nature. It is irrelevant behind a buffering reverse proxy (HAProxy or NGINX), performance is excellent, and the server code stays simple.

Here, 210k+ req/s on 64 connection handling threads with 390µs P90 latency and sub-ms P99 (Linux amd64, SBCL 2.4.1).

Of course I can't use the same approach for my SMTP server (too many parallel connections that stay alive, and no buffering proxy possible), but not all software has to use the same language.

louis,
@louis@emacs.ch avatar

@galdor Impressive stats! I'm curious, with "blocking nature" you mean that all these connections are handled within the same thread?

galdor, (edited)
@galdor@emacs.ch avatar

@louis No, there are 64 threads handling connections, i.e. reading requests and writing responses. However these read and write operations are blocking, meaning that they only return once data have been read/written. Without buffering, if a client opens 64 connections and does not send anything, the server will lose the ability to process requests.

With a buffering reverse proxy, e.g. NGINX, the proxy manages connections in a non-blocking way (usually with epoll or kqueue), without requiring one thread per connection, and accumulates requests until they are complete, and only then forwards them to the real server. No risk of blocking. Of course in theory I could write the HTTP server in a non-blocking way in Common Lisp, but then we're back to the lack of green threads, callback hell, etc.

galdor, to random
@galdor@emacs.ch avatar

I love SQLite, but every time I see someone claiming you do not need PostgreSQL, I feel the need to remind them that SQLite has very limited type handling and that you'll seriously miss row locking as soon as you want HA. And that's just the tip of the iceberg.

loke,
@loke@functional.cafe avatar

@galdor I think most people know that Postgres is much more capable when the workload gets over a certain size.

The problem is that most people also don't know where that size is.

In fact, I've used SQLite as well as Postgres and many other databases for about 30 years now, and I would struggle to answer that question outside the obvious: If you have a lot of parallel access it's time to look at Postgres.

galdor,
@galdor@emacs.ch avatar

@loke My point is that every debate on the subject is focused on performance, which is mostly irrelevant unless you have a ton of concurrent writes.

What people do not realize is how many features SQLite is missing compared to PostgreSQL. And do not get me started on the brain-dead idea of allowing NULL primary keys…

galdor, to random
@galdor@emacs.ch avatar

Some say you must start with a simple programming language and learn your way up. Others tell you to learn a low level language such as C to understand how everything works.

I've seen plenty of developers who started with Python or JS, and some who started with C. Comparing them, there is no doubt about which method yields the best developers.

The good news is that it's never too late to go back to the fundamentals.

oantolin,
@oantolin@mathstodon.xyz avatar

@galdor Another approach, illustrated by great books like Dijkstra's "A Method of Programming", Gries's "The Science of Programming", or Morgan's "Programming from Specifications", is to learn programming in the abstract using some semi-informal pseudocode notation. After all, what do you need a real programming language for before you know how to program? (I'm only about 80% kidding, since I recognize the value of experimenting on the computer while learning to program, but that 20% is maybe what you'd expect from a mathematician.)

tetrislife,

@oantolin
> semi-informal pseudocode
There is nowadays a very intriguing option as a first "design language".
@galdor
Another approach might be to first learn how to write and execute a test plan (maybe the Ruby Cucumber way, or with a tool).

galdor, to random
@galdor@emacs.ch avatar

Firefox tip of the day: create/update "userChrome.css" in the "chrome" subdirectory of your profile directory and add:

moz-input-box.urlbar-input-box {
font-family: monospace !important;
}

Do not forget to set "toolkit.legacyUserProfileCustomizations.stylesheets" to "true" in "about:config" (blame Mozilla for this BS).

Because yes, URIs should be readable in your browser.

galdor, to emacs
@galdor@emacs.ch avatar

The value of Emacs is not in the packages that are available (Gnus, org-mode, Magit, etc.). It is the fact that these packages live in the same application, manipulate text the same way, and can interact with each other to do exactly what you want them to do.

grinn,
@grinn@emacs.ch avatar

@galdor one part of this that always blows my mind is that buffers are a primitive data type yet the whole point of Emacs is to display buffers. Everything in Emacs is just changing the display of, and operating on, buffers.

To say it a different way, you can't construct a buffer from more basic Elisp elements. A buffer is the most basic data type, on par with a symbol or a literal. Yet it's also the main way users and developers interact with each other.

I don't know of another project where the developers and the users are all using the same concept as a building block. It would be as if MATLAB were written entirely using arrays.

al3x,
@al3x@hachyderm.io avatar

@galdor What is amazing to me is how extensible both and are while using completely different approaches.

And how little modern editors have learned from them. I don't want to poopoo on modern editors, as some are really good and cool. But very few have been built on this fundamental philosophy of extensibility.

galdor, to random
@galdor@emacs.ch avatar

There is a fundamental misconception about how software licenses and copyright work. Since Redis Labs relicensed Redis, I've read multiple times that it would not have happened if Redis had been GPL-licensed. This is incorrect. The entity (or entities if they all agree when there are multiple copyright holders) owning the software, company or individual, can relicense it at any time, and no license can change that.

Using the GPL license would also not have prevented Amazon from making money by hosting Redis (nothing wrong with that btw). Making it AGPL would have forced them to release any modifications, but then again, if it was AGPL, lots of companies would not have touched it (for good reasons).

There is absolutely nothing stopping anyone from forking the last BSD-licensed release of Redis and continuing the work with another name ("Redis" is a trademark that belongs to Redis Labs).

galdor, to random
@galdor@emacs.ch avatar

I've seen too many managers complain about engineers being "difficult" for being demanding about their work environment and tools. Developers are not machines, and their tools are important to them even if you do not understand why.

(source: @dhh in https://world.hey.com/dhh/finding-the-last-editor-dae701cc)

louis,
@louis@emacs.ch avatar

@galdor I fear that the machinification of devs will ever increase with the deployment of GPTs in corporations. Execs will expect that devs use GPTs first and then fix whatever it throws at them, in whatever tools Microsoft will mandate to be used.

And when things go wrong, it will still be the devs' fault. Where does that lead us? Bright devs will part ways with their jobs, and offshore outsourcing will see a big revival. Not a very positive outlook for our industry.

galdor,
@galdor@emacs.ch avatar

@louis It's too soon to tell, but I do not expect things to change that much.

Companies mass-producing shitty corporate software or run-of-the-mill websites will use and abuse AI code generation the same way they currently subcontract (with or without offshore teams). Quality does not matter to them; it's all about cheap and fast.

Companies treating software as an asset will continue hiring as they currently do; AI may be used as a tool (or not, given the obvious liability; time will tell), but it will not replace engineers, because their work is not just about puking code everywhere.

galdor, to random
@galdor@emacs.ch avatar

REST does not make any sense for HTTP APIs. A request is just data: encode it in the request, use the request path as the name of the operation you are performing, the end. Same thing for responses. Forget about nested routes, query strings, path variables… They just make your life more complicated and do not bring anything to the table.

I've been using this model for multiple HTTP APIs and I only regret I did not realize that sooner. BTW I'm not alone, see the AWS API or the Telegram one.

It also makes writing API clients an order of magnitude easier.

louis,
@louis@emacs.ch avatar

@galdor @bonifartius @Sophistifunk Agree, REST often makes things more difficult. I moved to RPC-style for internal APIs. The only reason to make it REST is when you expect the requests to be cached by the client, which is not possible for POST requests.

galdor,
@galdor@emacs.ch avatar

@louis @bonifartius @Sophistifunk
API caching at the HTTP layer is a minefield and usually best handled in the application layer. But if you have good reasons to use HTTP caching, nothing is stopping you from accepting both GET and POST for all routes. I do it for convenience.

galdor, to emacs
@galdor@emacs.ch avatar

I find it hilarious that people perceive Firefox as a bastion of freedom when it makes it impossible to install add-ons which have not been signed by Mozilla. You cannot add new signing keys. You cannot even install your own unsigned add-on on your own Firefox instance on your own computer.

Imagine if Emacs did not let you install a package if it was not signed by the FSF…

galdor, to programming
@galdor@emacs.ch avatar

First reaction after installing Erlang/OTP 27 rc1 (kerl build-install 27.0-rc1 27): yes, ~"foo" is shorter than <<"foo">>. But 1/ it's a lot less ergonomic to type (try it!) and 2/ the default printer is still going to spit out <<"foo">>.

Using b"foo" would have been so much better.

louis,
@louis@emacs.ch avatar

@galdor Too bad that input ergonomics of international keyboard layouts is rarely taken into account in programming language design.

The pipe ("|"), for example: I need to press Right-Alt + 7. Try that with one hand without getting a cramp. The German keyboard is even worse, with {}[] on Right-Alt + 5,6,8,9. Or ~, which needs an additional space to register (Right-Alt + ^ + Space).

Which is why I find Lisp or Pascal syntax so refreshing.

galdor,
@galdor@emacs.ch avatar

@louis In general, programming in anything but QWERTY is an exercise in frustration. Switching from AZERTY to QWERTY (with compose) 15 years ago was one of the best choices I ever made for development.

galdor, to random
@galdor@emacs.ch avatar

An unexpected problem with event-based IO in Common Lisp is that it breaks the condition/restart system. E.g. you upload a file using HTTP and a non-blocking client. The state machine handling the flow (send request, read HTTP 100 response, send body, read response, execute callback) runs in the IO thread. If anything signals a condition, it has to be handled in the IO thread, completely disconnected from the code that initiated the HTTP request.

Extrapolate that to a server running multiple complex IO flows in parallel. This is really not good, the language just does not match the problem.

galdor,
@galdor@emacs.ch avatar

@louis The Go runtime handles IO events. Goroutines yield on IO and function calls (and IIRC regularly, to avoid long computations that make no function calls hogging a core). From your perspective, IO functions block execution, but in reality they only block the current goroutine while others continue to execute.

So no risk of inactive sockets blocking your entire server, no callback hell, all computations where some parts require IO are laid out linearly, errors are handled where they are triggered, etc.

This is the right way to do concurrency; Go and Erlang are the only two major languages to do it.

louis,
@louis@emacs.ch avatar

@galdor Thanks for the insights!

galdor, to random
@galdor@emacs.ch avatar

The slices package makes common operations much more compact, this is good. If you're still using 1.20 or older (what's your excuse?), you can use golang.org/x/exp/slices instead.

But there is no Map function. Or Filter (which should really be two functions, Select and Reject). Or Fold. They were proposed more than 2 years ago and of course rejected by the Go developers because they should be part of a "stream API" (???) which of course never materialized. As it was for packages, it is going to take years to get what is standard in most sane languages.

louis,
@louis@emacs.ch avatar

@galdor There is slices.Delete, but unfortunately it modifies the original slice, which made it a no-go for my use cases.

I'm still sticking with samber/lo, a functional Go package which has everything you ask for.
