Crell, (edited) to php
@Crell@phpc.social avatar

I'm noodling with a data storage layer library in PHP. There are two options:

  1. Auto-generate SQL tables/views/queries off of PHP data types (with attributes)
  2. Auto-generate PHP classes off of SQL tables/views/queries.

Which would you prefer? The goal is fully typed interaction in PHP space, but I'm not sure which side should be canonical.

Which would you rather work with, and why? Assuming a "good" DX in either case.

Answer why in replies, boost for reach, etc.

louis, to webdev
@louis@emacs.ch avatar

In the last few days I've been experimenting with replacing CRUD API code with stored procedures that directly produce the endpoint's JSON as a single-row scalar value. The API is then just a wrapper that authenticates, validates input, and streams the DB's JSON directly to the client.

  • No ORMs, no SQL generators etc.
  • All SQL is where it should belong: in the database
  • The API does only single "CALL myfunc(…)" DB calls (see the sketch after these lists)
  • A simple centralised error handler can accurately report errors from the database
  • No weird scanning of mixed row/JSON columns into structs and re-marshalling everything to JSON
  • The codebase is collapsing to 20% of its former size (by LOC)
  • Stored Procedures can use wonderfully declarative SQL code
  • Response times in the microseconds, even for multiple queries, since everything happens inside the DB

More side effects:

  • the data model can change and evolve without touching the API at all
  • Zero deploys mean zero downtime
  • the API application is so tiny, I could easily switch it to any programming language I want (yes, even Common Lisp) without worrying about available database libraries, type mapping, or rewriting tens of thousands of lines of intermixed language/SQL code.
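
A minimal sketch of what one such endpoint function could look like, in PostgreSQL syntax with made-up table and column names (the setup described above uses CALL-style procedures; a function returning a single JSON scalar shows the same idea):

  -- Hypothetical endpoint: the whole response body is assembled inside the DB
  -- and returned as one JSON scalar.
  CREATE OR REPLACE FUNCTION api_get_user(p_user_id bigint)
  RETURNS jsonb
  LANGUAGE sql STABLE
  AS $$
    SELECT jsonb_build_object(
      'id',    u.id,
      'name',  u.name,
      'posts', (SELECT coalesce(jsonb_agg(jsonb_build_object('id', p.id, 'title', p.title)), '[]'::jsonb)
                FROM posts p
                WHERE p.user_id = u.id)
    )
    FROM users u
    WHERE u.id = p_user_id;
  $$;

  -- The API layer then runs a single statement and streams the scalar result:
  --   SELECT api_get_user(42);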

The dev industry as a whole is heading in the opposite direction: more ORMs, more layers, more database abstraction. More weird proprietary cloud databases, each with its own limited capabilities and query language.

So you tell me: Is it crazy? Is it wrong? Why do I have doubts despite everything working out beautifully?

Kovah, to PostgreSQL
@Kovah@mastodon.social avatar

Just use Postgres, they said.

I just spent FORTY MINUTES trying to restore a database dump in Postgres. WHAT. THE. HELL?! 🤬

Why is modern tech so damn complicated?

vwbusguy, to PostgreSQL
@vwbusguy@mastodon.online avatar

PostgreSQL has so many crazy features, and yet I've never had comfortable muscle memory in it, so it always takes me longer to do just about anything than in some flavor of MySQL (or MariaDB, depending on which track you followed to get here).

rolle, to IT
@rolle@mementomori.social avatar

:skull360: Heads up! I need to rescale the database drive soon. It’s growing and I want the disk to be able to hold more data. The current max size is about 40 GB and I will grow it to 500 GB to make it more future proof.

This means I have to shut down my instance for this operation to make sure there will be no data corruption. The downtime will probably be only a few minutes, but there will be visible downtime. I'll let you know the exact time when this will take place.

Database operations always make me kinda anxious for some reason despite the fact there has not been a single issue with them for me in the past.

#MementoMoriSocial #MastoAdmin #SysOp #Servers #PostgreSQL

helge, to PostgreSQL
@helge@mastodon.social avatar

Did I mention that it would be really nice to have PostgreSQL as an embeddable library, similar to SQLite? Size-wise that should be fine; I think the whole PG daemon is just ~5MB.
I know that the architecture doesn't lend itself to it, but someone has to do this eventually! 🙂

louis, to PostgreSQL
@louis@emacs.ch avatar

I'm now almost through migrating from PG to MySQL with stored procedures only. I ended up with 140 stored procedures. The insights I gained into the business domain are incredible.

Now there are some bigger challenges:

  1. How to test an API that literally has hundreds of different endpoints + parameter combinations against the new version
  2. How to transfer the data of a 100GB+ PG database to MySQL quickly enough that downtime stays under 15 minutes.
  3. Or even more challenging: how to transfer 60 PG tables to MySQL with a "slightly" optimised schema and a buggy pg_dump exporter that wrongly decodes JSON values into unreadable data (bug filed in 2015, maintainers not interested)? Or a buggy PG-MySQL Foreign Data Wrapper that fails with boolean and JSON columns (bug filed in 2020, maintainers not interested)?

I've tried 10 different tools that advertise themselves as a solution to this, and not a single one was able to overcome these challenges (issues with JSON, timestamp and boolean columns). Any hints?
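
One possible workaround, sketched here with a hypothetical table (not something from the post): export via COPY with explicit casts, so boolean, timestamp and JSON values leave PostgreSQL as plain literals that MySQL's loader can ingest:

  -- Export a table as CSV with casts that MySQL understands:
  -- booleans become 0/1, timestamps lose their zone offset, jsonb becomes JSON text.
  COPY (
    SELECT id,
           to_char(created_at, 'YYYY-MM-DD HH24:MI:SS') AS created_at,  -- drop the tz offset
           is_active::int  AS is_active,                                -- t/f -> 1/0
           payload::text   AS payload                                   -- jsonb -> JSON string
    FROM   some_table
    ORDER  BY id
  ) TO STDOUT WITH (FORMAT csv, HEADER true);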

So if "interoperability" is a goal of the SQL standard, it clearly failed. If "interoperability" is a benchmark for open source databases, Postgres doesn't shine at all. All the features that make Postgres "so good" (like ARRAYs which are unknown to every other SQL database, BOOLs and Custom Types) are in fact locking your project in like forever.

However, I'm not the one who gives up easily. I'll likely end up with a hand-rolled migration tool and then sell it to make a fortune off it, for all those non-existing devs who want to migrate away from Postgres. :neofox_evil:​

#sql #mysql #postgresql

the_halmaturus, to php Esperanto

Need some help to get a PHP / Postgresql thing fixed.

I am running Debian 12 and have installed PostgreSQL, PHP and pdo_pgsql. Additionally I created the needed user and database for ulogger.

I can connect with that user/password from DBeaver and see the empty DB.

Unfortunately PHP does not believe it can handle the PostgreSQL DB and fails with the error message: "could not find driver"

Packages installed are:
• apt install postgresql
• apt install php
• apt install php-json
• apt install php-fpm/stable
• apt install php-pgsql/stable
• apt install postgresql postgresql-client

Any idea what I could have done wrong or which step I missed?

Thanks in advance for any help on this question.

​:boosts_ok:​

timwilson, to php

I've got a PHP website with a PostgreSQL backend. It's entirely read-only in production, and its largest table has about 10,000 rows. Postgres, PHP, and a Caddy proxy all run in separate Docker containers.

Is it crazy to think that a simple system like this would run just fine with SQLite instead? In the longer term I’d like to move the whole thing to running with the Django Rest Framework and rework the front-end bit entirely.

xocolatl, to postgres

Hey people.

Do you use ±infinity in your dates and timestamps? What is your use case? What would you do instead if they weren't available?
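
For anyone reading along who hasn't met them, a quick illustration of the values in question:

  -- 'infinity' sorts after every finite timestamp, '-infinity' before them:
  -- handy for "valid until further notice" ranges without resorting to NULL.
  SELECT 'infinity'::timestamptz          AS no_end,
         '-infinity'::timestamptz         AS no_start,
         now() < 'infinity'::timestamptz  AS always_true;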

Please boost for reach.

kubikpixel, to webdev German
@kubikpixel@chaos.social avatar

Am I the only one who has never heard of SPARQL? 🤔

Of course I know and use SQL for databases, but I had never heard of SPARQL or that there are also solutions for it ⚙️

What do you think, should I learn it and use it instead of SQL in a project, or am I misunderstanding it? 🤷‍♂️

👉 https://en.wikipedia.org/wiki/SPARQL

fell, to PostgreSQL
@fell@ma.fellr.net avatar

God I hate Plesk so much...

  1. I have to jump through hoops to manage my PostgreSQL server, because my Plesk license doesn't cover PostgreSQL.
  2. It keeps annoying me about buying shit.
  3. It defaults to, and supports only, NGINX as a caching proxy in front of Apache.
  4. Said NGINX support is broken af, making you click options in a specific order for them to work at all.

I want to get rid of it so bad but I don't want to set up all those servers from scratch 😭

einenlum, to linux

Anyone know a good database GUI alternative on Linux? I'm excluding MySQLWorkbench, DBeaver, PHPMyAdmin and Adminer.

Any GUI that does the job well and does not crash?

I want to be able to use

paulox, to fediverse
@paulox@fosstodon.org avatar

Dear Fediverse, I ask you for recommendations 🌌

Recommend me some books to buy 📚

I'd like to have 🪪
• author
• title
• suggestion reason
• purchase URL

I prefer other sellers (more than Amazon and content with DRM), but Amazon is fine if there are no alternatives ✨

Thank you 🙏

P.S. It's fine to propose books that you've written 👍

P.P.S. please 🔁

mackuba, to postgres
@mackuba@martianbase.net avatar

Anyone here used both and a lot and could tell me how they compare for larger DBs? Mostly in terms of performance, assuming I don't need some advanced features - talking about 10s or 100s of GBs, a lot of writes but not that many reads. 🤔

louis, (edited) to PostgreSQL
@louis@emacs.ch avatar

What is your opinion on PRIMARY KEYs for database tables that are append-only (e.g. for logging), have no natural primary key, and are not referenced?
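
For concreteness, the two shapes being weighed, with hypothetical names and in PostgreSQL syntax: a bare heap versus a surrogate key added purely for tooling that insists on one.

  -- Option A: append-only log table with no primary key at all
  CREATE TABLE app_log (
      logged_at  timestamptz NOT NULL DEFAULT now(),
      level      text        NOT NULL,
      message    text        NOT NULL
  );

  -- Option B: the same table plus a surrogate key, only for tools or
  -- replication setups that expect every table to have one
  CREATE TABLE app_log_with_pk (
      id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      logged_at  timestamptz NOT NULL DEFAULT now(),
      level      text        NOT NULL,
      message    text        NOT NULL
  );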

#sql #postgresql #sqlite #mysql #SQLServer #oracle

jamie, to PostgreSQL
@jamie@zomglol.wtf avatar

I learned more about GPT-based AI algorithms from this article than from literally everything else I've read about it so far combined. It implements a GPT algorithm using SQL.

https://explainextended.com/2023/12/31/happy-new-year-15/

louis, to PostgreSQL
@louis@emacs.ch avatar

I try really hard to like denormalization of data in document databases.

I also try really hard to like relational SQL for deeply nested data with many relations.

Today I failed on both. Suggestions explicitly welcome.

wiredprairie, to programming

I'm enjoying the ease of use of the npm package 'postgres'. I'd tried a few other options that were a bit more "ORM" and not had great success.

It's not simple -- but that's good, as its feature set is definitely robust. It also works well with TypeScript.

It took me about 45 minutes to swap from Kysely.

https://www.npmjs.com/package/postgres

#NodeJS #JavaScript #Deno #Postgresql #ECMAScript #TypeScript #NPM

thisismissem, to PostgreSQL
@thisismissem@hachyderm.io avatar

Anyone have a good guide handy on PostgreSQL transactions encountering deadlocks and being aborted?
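
Not a guide, but the minimal situation most guides start from (hypothetical table): two transactions update the same two rows in opposite order.

  -- Session A
  BEGIN;
  UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- A locks row 1

  -- Session B, concurrently
  BEGIN;
  UPDATE accounts SET balance = balance - 10 WHERE id = 2;  -- B locks row 2
  UPDATE accounts SET balance = balance + 10 WHERE id = 1;  -- B now waits for A

  -- Session A again
  UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- A waits for B: a cycle
  -- After deadlock_timeout (1s by default) PostgreSQL picks one transaction and
  -- aborts it with "ERROR: deadlock detected". The aborted transaction must
  -- ROLLBACK; the usual remedy is to retry it, and to take locks in a consistent
  -- order so the cycle can't form.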

jrefior, (edited) to PostgreSQL
@jrefior@hachyderm.io avatar

Can anyone recommend a PostgreSQL book? Audience is a small business owner with software development experience who will be managing his own instance for a web app until there's enough revenue to hire someone to help with that. He's on version 14, if that matters.

engkiosk, to PostgreSQL German
@engkiosk@podcasts.social avatar

Oooooh, an elephant 😍

theory, to PostgreSQL
@theory@xoxo.zone avatar

I'm giving a talk on the "State of the Extension Ecosystem" this Wednesday at noon Eastern time (17:00 UTC). Should be fun!

https://justatheory.com/2024/03/state-of-the-extension-ecosystem/

K9MAX, to PostgreSQL German

Running pg_dump produced the following error message:
pg_dump: detail: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: detail: Command was: COPY public.preview_cards (id, url, title, description, image_file_name, image_content_type, image_file_size, image_updated_at, type, html, .... TO stdout;

The dump file has been created, but obviously without the preview_cards table. Any idea what causes the above error?
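
That allocation size is 2^64 minus 3, which in practice usually points at a damaged (un-detoastable) value in that table rather than a genuine allocation request. A rough sketch of how one might hunt for the unreadable rows, assuming id is a readable primary key (my sketch, not a vetted recovery procedure; work on a copy and keep a filesystem-level backup first):

  -- Try to fully read (detoast) each row one by one; log the ids that fail.
  DO $$
  DECLARE
    bad_id bigint;
  BEGIN
    FOR bad_id IN SELECT id FROM public.preview_cards ORDER BY id LOOP
      BEGIN
        PERFORM length(t::text)
        FROM public.preview_cards t
        WHERE t.id = bad_id;
      EXCEPTION WHEN OTHERS THEN
        RAISE NOTICE 'row id=% cannot be read: %', bad_id, SQLERRM;
      END;
    END LOOP;
  END
  $$;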

DiazCarrete, to haskell
@DiazCarrete@hachyderm.io avatar

The Rel8 library lets you construct queries using a monadic interface.

Interesting bit: "Rel8 has a fairly unique feature in that it’s able to return not just lists of rows, but can also return trees."

https://rel8.readthedocs.io/en/latest/cookbook.html#tree-like-queries
