The epitome of the 2024 tech company would be a 100% RTO to an office that's been emptied by layoffs, so that employees can collaborate with their chatbot in person.
"Data-driven" approaches to decision-making always steer us astray. Measured solely by how many people a car transports, this fold-out seat would be a brilliant feature that got its PM promoted.
The true power of #genAI is not technological but rhetorical: almost every conversation about it concerns what executives say it will do "one day" or "soon" rather than what we actually see today (and, of course, there's no mention of a business model, because there isn't one).
We are told to simultaneously believe AI is so "early days" as to excuse any lack of real usefulness, and that it is so established - even "too big to fail" - that we are not permitted to imagine a future without it.
We are told #genAI is going to change everything...
...but every #AI "use case" so far has direct and transparent lineage from existing malignant practices.
Offshoring, algo-washing, information warfare, plagiarism, content mills and SEO spam, phishing/impersonation, revenge porn, hoodwinking investors, avoiding responsibility for management decisions, and blindly copy-pasting code from Stack Overflow.
Seeing some people read the news about #Boeing and #SpaceX and conclude that Elon Musk should build planes - which makes me wonder if they've ever heard of #Tesla.
Trying to improve the wrong dimension of #UX will only lead to waste. Learn the difference:
Wonky products are confusing; their mental model doesn't match the user's. Janky products are conceptually fine, but buggy or inconsistent.
Jank manifests only at the hi-fi stage of development and can be solved by UI redesign or backend optimization. Wonk, however, must be caught at the conceptual stage with lo-fi tools - you will NOT be able to fix it at the #UI layer. (A toy sketch of the difference follows.)
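A toy contrast, with invented function names (my illustration, not from the post), of where each flaw lives:

```python
# Hypothetical "volume" control, gotten wrong in two different ways.

# Wonky: the user's mental model is "volume from 0 to 100", but this
# interface exposes raw amplifier gain in decibels. The concept itself
# mismatches the user, and no amount of UI polish will fix it.
def set_gain(gain_db: float) -> float:
    return gain_db

# Janky: the concept is right (volume 0-100), the implementation is
# buggy. A one-line fix at the implementation layer solves it.
def set_volume(percent: int) -> int:
    return min(percent, 99)  # bug: silently clips 100 down to 99

print(set_volume(100))  # prints 99: right concept, wrong behavior
```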
It's easy to say #Agile has failed - but it hasn't. Rather, the entire framework of "building good software" has failed. Our leaders consistently set goals like "do Agile" rather than "build good software" because it's a lot easier to set the former than the latter.
But there is an analogous domain that shows us how to build a better relationship with our tools.
This is your brain on #ai: "Subject to the terms of this Agreement, You hereby grant to HP a non-exclusive, worldwide, royalty-free right to use, copy, store, transmit, modify, create derivative works of and display Your non-personal data for its business purposes."
Someone out there wrote "as a user I want my printer to steal my documents to train LLMs" without hesitation.
I use a great prompt-driven interface for generating images. It's much faster than Midjourney (produces millions of results in a fraction of a second).
There's a toxic thread within #UX #design - that only we can possibly know what customers want, and anyone who disagrees is a bad guy we need to beat up to move forward. But when the Danish Design Ladder asks us to embrace Design-as-Culture, it also means treating our internal customers with the same respect as our external ones.
Designers can take a massive leap forward in how we work - if only we can let go of our egos and trust our partners to do the same.
Artifacts are only valuable when they help achieve something: disseminate/explain an idea, convince people, etc.
Don't be afraid to ask how your outputs will be used. If there's no clear purpose beyond "we've always asked for X," you will never be seen as an equal partner contributing value.
Now that we have #LLMs to generate, on a whim, content and visuals that nobody will ever look at again, it is no longer necessary for humans to keep doing it.
There's no such thing as a flat org - just an org with a visible hierarchy, vs a secret one. Every "flat org" has its designated leaders and ditch-diggers, except the only way to tell which one you are is when you try to do something and are told to knock it off.
We need an equivalent of "this is not financial advice" for design: "this is not based on research."
I've seen countless well-meaning teams say "we will do it this way and then come back to this decision" - and they almost never do. That's because unless you carefully document decision provenance, finding your way back to those decisions is impossible.
Humans have something called "source amnesia" - our brains forget where we learned something but remember the "fact" itself, even if the "fact" was actually an assumption, a guess, or an AI output that we were going to "validate."
As a result, the semantic environment becomes polluted with "common knowledge" that's really nothing more than assumptions repeated often enough.
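Here's a minimal sketch of what "carefully document decision provenance" might look like in practice - the record shape and field names are my own illustration, not any standard:

```python
# Hypothetical decision-provenance log; all fields are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Decision:
    summary: str       # what was decided
    rationale: str     # why it seemed right at the time
    basis: str         # "research", "assumption", "guess", or "ai_output"
    revisit_by: date   # the date we promised to come back to it
    owner: str         # who is accountable for actually coming back

decision_log: list[Decision] = [
    Decision(
        summary="Skip onboarding research for the MVP",
        rationale="Deadline pressure; we believe the flow is obvious",
        basis="assumption",  # labeled, so it can't quietly become a "fact"
        revisit_by=date(2025, 6, 1),
        owner="design-lead",
    ),
]

# Surface anything resting on an unvalidated basis instead of
# letting it harden into "common knowledge".
for d in decision_log:
    if d.basis != "research":
        print(f"Revisit by {d.revisit_by}: {d.summary} ({d.basis})")
```

The basis field is the antidote to source amnesia: the label travels with the "fact," so an assumption repeated often enough still announces itself as an assumption.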