@oliver@phpc.social avatar

oliver

@oliver@phpc.social

Earning a living with #PHP since 2005. In love with modernizing legacy codebases, excited by mission critical features.


davidbisset, to space
@davidbisset@phpc.social avatar

Why I 🧡 the web:

https://iss.matteason.co.uk/

VERY slick looking 3D tracker with live video and near-realtime clouds.

nikitonsky, to random
@nikitonsky@mastodon.online avatar

That's called work. That's the definition of work

grmpyprogrammer, to random
@grmpyprogrammer@phpc.social avatar

Thinking about the AI skill threat:

$100/hr to fix your stuff
$200/hr if you did it with AI

davidbisset, to programming
@davidbisset@phpc.social avatar

Developers, you cannot complain about debugging your code unless you work at NASA. New rule. https://mastodon.social/@jimray/112316533559894202

V0ldek,

Surely the task of reviewing something written by an AI that can't be blindly trusted, a task that basically requires you to know what said AI is "supposed" to write in the first place to be able to trust its output, is bound to always be simpler and result in better work than if you sat down and wrote the thing yourself.

This is only semi-related but.

When I quit Microsoft last year they were heavily pushing AI into everything. At some point they added an automated ChatGPT nonsense "summary" to every PR you opened. First it'd edit the description to add its own take on the contents, and then it'd add a review comment.

Anyone who has had to deal with PR review knows it can be frustrating. This made it so that right off the bat you would have to deal with a lengthy, completely nonsensical review that missed the point of the code, asked for terrible "improvements", or straight up proposed incorrect code.

In effect it made the process much more frustrating and time-consuming. The same workload was there, plus you had to read the equivalent of a 16-year-old who thinks he knows how software works explaining your work to you badly. And since it's a bona fide review comment, you have to address it and close it. Absolutely fucking kafkaesque.

Forcing humans to read and cleanup AI regurgitated nonsense should be a felony.

mhoye, to random
@mhoye@mastodon.social avatar

"Tesla warns owners that opening their doors or windows while installing a software update could damage the vehicle".

What do you even say, aside from "never get into a Tesla".

https://gizmodo.com/tesla-software-update-traps-woman-in-hot-car-1851407234

TechConnectify, (edited ) to random
@TechConnectify@mas.to avatar

NOTE TO LITERALLY THE ENTIRE INTERNET:

No thought is ever complete. There will always be holes to fill!

RESIST THE URGE TO FILL THOSE HOLES

If someone has said something which you largely agree with, especially if what they said aligns with your understanding of and goals towards an issue, then the LAST thing they need from you is someone playing devil's advocate. Nitpicking is annoying at best and, in a world where people are looking to drive wedges between allies, dangerous at worst.

1/2

TechConnectify, (edited )
@TechConnectify@mas.to avatar

If someone appears to be on your side broadly but hasn't specifically addressed your pet issue, that doesn't mean they don't know about your issue! They just haven't explicitly mentioned it.

If you'd like, take a "yes, and" approach. Increasing awareness of issues is always good (so long as they're real issues, of course).

Never, under any circumstances, should you take the omission of your fixation to mean they aren't on your side.

2/2

jeze, to mastodon
@jeze@kzoo.to avatar

I heavily suggest that if you've tried @trunksapp in the past and found it not to your liking, you go back and give it another shot. It's recently had a transformation of sorts, with tons of UX and performance polish. @Decad3nce has been hard at work!

https://play.google.com/store/apps/details?id=com.decad3nce.trunks

manchuck, to github
@manchuck@phpc.social avatar

GitHub suggested this for the DB Name. I wonder if I can find the rest of the connection string. Also, let this be a lesson in making sure you do not commit sensitive information to
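
(Not from the original post, but since the lesson is about keeping credentials out of the repo: a minimal sketch, assuming PDO and hypothetical DB_DSN / DB_USER / DB_PASSWORD environment variable names, of reading a connection string from the environment instead of committing it.)

<?php
// Hypothetical sketch: pull database credentials from the environment
// (server config or a git-ignored .env loader) rather than hard-coding
// them in committed source.
$dsn      = getenv('DB_DSN') ?: 'mysql:host=localhost;dbname=app';
$user     = getenv('DB_USER') ?: '';
$password = getenv('DB_PASSWORD') ?: '';

try {
    $pdo = new PDO($dsn, $user, $password, [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);
} catch (PDOException $e) {
    // Log the failure without echoing the DSN or credentials.
    error_log('Database connection failed: ' . $e->getMessage());
    exit(1);
}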

george, to random
@george@phpc.social avatar
Crell, to php
@Crell@phpc.social avatar

Oh HELL yes!

"We propose to update and modernize the English PHP documentation, review and remove user comments, integrate 3v4l.org for interactive examples, and simplify the maintenance process. This will make PHP more accessible to new developers and serve as a reliable reference for experienced ones."

https://thephp.foundation/blog/2024/02/26/transparency-and-impact-report-2023/

scy, to random
@scy@chaos.social avatar

"Create your account to continue reading."

yeah how about i just close the tab instead

samir, to random
@samir@functional.computer avatar

It took me a long time but I finally figured out that Copilot and friends are not meant for me, they're meant for JavaScript programmers.

One of the reasons Iโ€™m enjoying Rust is because thereโ€™s a culture of being very thoughtful about every line of code you write, and every dependency you add.

Conversely, in JS land, "just add this dependency" is now a meme.

In a land where you're encouraged to "just" add more boilerplate (thanks, create-react-app), why wouldn't you outsource that?

samir,
@samir@functional.computer avatar

It's a trap. More code is not your friend, it's your enemy.

Don't add the dependency. Don't generate 8000 lines of JavaScript. Write more CSS. Don't accept the digital grey goo spewed out of tools such as Copilot.

Code is a liability. Anything that helps you make more, quickly, with no effort, is leading you down a very dark path.

FractalEcho, to ChatGPT
@FractalEcho@kolektiva.social avatar

The racism in chatGPT we are not talking about....

This year, I learned that students use chatGPT because they believe it helps them sound more respectable. And I learned that it absolutely does not work. A thread.

A few weeks ago, I was working on a paper with one of my RAs. I have permission from them to share this story. They had done the research and the draft. I was to come in and make minor edits, clarify the method, add some background literature, and we were to refine the discussion together.

The draft was incomprehensible. Whole paragraphs were vague, repetitive, and bewildering. It was like listening to a politician. I could not edit it. I had to rewrite nearly every section. We were on a tight deadline, and I was struggling to articulate what was wrong and how the student could fix it, so I sent them on to further sections while I cleaned up ... this.

As I edited, I had to keep my mind from wandering. I had written with this student before, and this was not normal. I usually did some light edits for phrasing, though sometimes with major restructuring.

I was worried about my student. They had been going through some complicated domestic issues. They were disabled. They'd had a prior head injury. They had done excellently on their prelims, which of course I couldn't edit for them. What was going on!?

We were co-writing the day before the deadline. I could tell they were struggling with how much I had to rewrite. I tried to be encouraging and remind them that this was their research project and they had done all of the interviews and analysis. And they were doing great.

In fact, the qualitative write-up they had done the night before was better, and I was back to just adjusting minor grammar and structure. I complimented their new work and noted it was different from the other parts of the draft that I had struggled to edit.

Quietly, they asked, "is it okay to use chatGPT to fix sentences to make you sound more white?"

"... is... is that what you did with the earlier draft?"

They had, a few sentences at a time, completely ruined their own work, and they couldn't tell, because they believed that the chatGPT output had to be better writing. Because it sounded smarter. It sounded fluent. It seemed fluent. But it was nonsense!

I nearly cried with relief. I told them I had been so worried. I was going to check in with them when we were done, because I could not figure out what was wrong. I showed them the clear differences between their raw drafting and their "corrected" draft.

I told them that I believed in them. They do great work. When I asked them why they felt they had to do that, they told me that another faculty member had told the class that they should use it to make their papers better, and that he and his RAs were doing it.

The student also told me that in therapy, their therapist had been misunderstanding them, blaming them, and denying that these misunderstandings were because of a language barrier.

They felt that they were so bad at communicating, because of their language, and their culture, and their head injury, that they would never be a good scholar. They thought they had to use chatGPT to make them sound like an American, or they would never get a job.

They also told me that when they used chatGPT to help them write emails, they got more responses, which helped them with research recruitment.

I've heard this from other students too. That faculty only respond to their emails when they use chatGPT. The great irony of my viral autistic email thread was always that had I actually used AI to write it, I would have sounded decidedly less robotic.

ChatGPT is probably pretty good at spitting out the meaningless pleasantries that people associate with respectability. But it's terrible at making coherent, complex, academic arguments!

Last semester, I gave my graduate students an assignment. They were to read some reports on the labor exploitation and environmental impact of chatGPT and other language models. Then they were to write a reflection on why they have used chatGPT in the past, and how they might choose to use it in the future.

I told them I would not be policing their LLM use. But I wanted them to know things about it they were unlikely to know, and I warned them about the ways that using an LLM could cause them to submit inadequate work (incoherent methods and fake references, for example).

In their reflections, many international students reported that they used chatGPT to help them correct grammar, and to make their writing "more polished".

I was sad that so many students seemed to be relying on chatGPT to make them feel more confident in their writing, because I felt that the real problem was faculty attitudes toward multilingual scholars.

I have worked with a number of graduate international students who are told by other faculty that their writing is "bad", or are given bad grades for writing that is reflective of English as a second language, but still clearly demonstrates comprehension of the subject matter.

I believe that written communication is important. However, I also believe in focused feedback. As a professor of design, I am grading people's ability to demonstrate that they understand concepts and can apply them in design research and then communicate that process to me.

I do not require that communication to read like a first language student, when I am perfectly capable of understanding the intent. When I am confused about meaning, I suggest clarifying edits.

I can speak and write in one language with competence. How dare I punish international students for their bravery? Fixation on normative communication chronically suppresses their grades and their confidence. And, most importantly, it doesn't improve their language skills!

If I were teaching rhetoric and comp it might be different. But not THAT different. I'm a scholar of neurodivergent and Mad rhetorics. I can't in good conscience support Divergent rhetorics while suppressing transnational rhetoric!

Anyway, if you want your students to stop using chatGPT then stop being racist and ableist when you grade.

SpeakerToManagers, to random
@SpeakerToManagers@wandering.shop avatar

Air Canada claims chatbot, not company, is legally responsible for false information. Court disagrees.

Air Canada must honor refund policy invented by airline's chatbot - Ars Technica https://apple.news/AHHtGvJdHS4-N7lZIaldLVw

sarah, to php
@sarah@phpc.social avatar

I'm working on something special for the PHP community!

It's called PHP For Hire, and it's a talent directory for developers who are looking for work.

Interested? I'm opening it to private beta right now, and hoping to do a wider launch once there are 10-15 people in the directory. Reach out to me here for the link and be one of the first!

SynAck, to random
@SynAck@corteximplant.com avatar

In my opinion, the tech requirement that every engineer at any level be a "full stack" developer is a fraud and a bald-faced exploitation of the worker. And a lot of tech workers have blindly bought into it as "just part of the job". But let me be clear:

  • UX design is a full time job
  • UI/front-end dev is a full time job
  • API/backend dev is a full time job
  • Database administration is a full time job
  • QA and testing is a full time job
  • Deployments/sysadmin is a full time job

Each of these things has its own set of mindsets, disciplines, and tools that need to be thoroughly understood in order to work properly, and their goals are often at odds with each other in implementation.

I believe that this "full stack developer" malarkey is why we have so many buggy, insecure, and inefficient apps and web sites. Devs are forced to be "jacks of all trades, master of none" just to keep their job.

And the ludicrously high salaries (at least here in the US anyway) aren't fooling anyone. They're a smokescreen to keep the worker blind to the fact that they're doing the jobs of 2, sometimes 3, other people all so the company doesn't have to hire those 2 or 3 butts to fill the seats.

It's a leftover relic from the "startup scramble" days that has become the norm, and I'm calling bullshit. The emperor is naked, friends.

hugoestr, to random
@hugoestr@functional.cafe avatar

PHP is a remarkable language. The DSL for web pages. The little template language that could. It is the web version of AWK: a language crafted for its task, making that task easier than a general-purpose programming language would.

Also great documentation. I am grateful to those who made PHP so learnable
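
(Not from the original post: a minimal, hypothetical sketch of that "template language" point, PHP inlined directly in the HTML it was crafted to generate.)

<?php $tasks = ['write docs', 'review PRs', 'ship release']; ?>
<ul>
  <?php foreach ($tasks as $task): ?>
    <li><?= htmlspecialchars($task) ?></li>
  <?php endforeach; ?>
</ul>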

knavalesi, to poland
@knavalesi@mastodon.social avatar
draconigen, to random
@draconigen@packmates.org avatar

Totally didn't do this while slacking off on another project.

trunksapp, to random
@trunksapp@mastodon.social avatar

1 year of trunks!

Thank you to everyone for your feedback, excitement, and contributions to this little hobby project.

Sometimes I think I'm in over my head for building an iOS, Android and Web experience that can also adapt to large screens. 😅 I'm not sure I could continue without your kind words.

On to another year and more whimsy.

🎉🎉🎉🎉🎉🎉

revengeday, to random
@revengeday@corteximplant.com avatar

What? :confusedlucy:

dansahagian, to random
@dansahagian@fosstodon.org avatar

Small paid contracting gig available.

Looking for someone to update and fix up the front end of a small hobby project I run. It's HTML & CSS in Django templates. I can handle the template bits, but need help on design, HTML, and CSS. Please boost for visibility.

Happy to talk here or can email
dan [at] website in my bio

Thanks!

andrea, to random
@andrea@ubuntu.social avatar

🤔 That must be a local train

kellylepo, to random
@kellylepo@astrodon.social avatar

A trailer for a new scientific visualization of the Pillars of Creation in the Eagle Nebula. Full video coming soon.

https://youtu.be/Slx91ASCiXw?si=EiV-qTaZsgh9bIQ2
