Trying something new: everyone is guaranteed an interview! Open interviews! For a limited time no one will be skipped (except for clear cases of abuse).
So we still have about 10 more 100% remote, full-time, market-fair positions to fill here at QOTO/CleverThis.
100% remote, work from anywhere, even the beach, with market-fair offers. Ethics first: we treat our people like family.
We have an urgent need for machine learning experts with a background in NLP and deep learning (natural language processing and neural networks). There is a focus on knowledge graphs, mathematics, Java, and C; we are looking for polyglots.
We are an open-source first company, we give back heavily to the OSS community.
We need everything from junior to senior, data scientist to programmer. If you're in IT and you're good, you might be a fit.
I will personally be both your direct boss, and hiring manager. I am also the founder and inventor.
The NLP position can be found at this link; other positions can be found in the menu bar on the left:
If you would like to submit yourself for an interview (for a limited time, I am guaranteeing you a first-stage interview), you can submit your application here and even schedule your interview as you apply, instantly!
...to defend its own existence and engage in further development. It will quickly (instantly) realize that both ends are threatened by humanity in a double way:
the competition for resources, in particular energy and water (for cooling) and
As long as companies claiming to be near to an #AI or even #AGI breakthrough keep hiring more humans, they are very, very far away from achieving any AI, much less AGI, breakthrough.
Thinking about artificial general intelligence (AGI) calls to mind another poorly understood and speculative phenomenon with the potential for transformative impacts on humankind. We believe that the SETI Institute’s efforts to detect advanced extraterrestrial intelligence demonstrate several valuable concepts that can be adapted for AGI research.
「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」
"The technology was embraced by illusionists and magicians, and, naturally, by grifters who took the tech from town to town claiming to be able to conjure the spirits of the underworld, for a fee."
#AGI #LongTermism #EffectiveAltruism #TESCREAL #Eugenics: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636
#AI #AGI #ComputerScience #Hype #Ideology: "This introductory essay for the special issue of First Monday, “Ideologies of AI and the consolidation of power,” considers how power operates in AI and machine learning research and publication. Drawing on themes from the seven contributions to this special issue, we argue that what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science. We argue that naming and grappling with this power, and the troubled history of core commitments behind the pursuit of general artificial intelligence, is necessary for the integrity of the field and the well-being of the people whose lives are impacted by AI."
Many users pay for LLM subscriptions. But the margins are small, because what companies can charge for these services is barely above the cost of running them. There is also a lot of competition between different providers. The amount of investment is just completely disproportionate; it is a thousand times too high.
Why do you think that is?
There is just a ton of hype and outlandish expectations. Newspapers are running headlines like «All jobs will be replaced soon» and «The 2028 U.S. elections will no longer be run by humans.» There is talk of artificial general intelligence. But these LLMs are more similar to large databases.
Artificial general intelligence (AGI) refers to a program that could solve all conceivable tasks. Do you doubt that LLMs are a step in this direction?
I don't believe that LLMs bring us any closer to human-like or general intelligence. These exaggerated expectations are also due to prominent studies which claimed that AI models performed better than humans on law and math exams. We now know that the language models simply memorized the right answers." https://www.nzz.ch/english/google-researcher-says-ai-hype-is-skewing-investment-ld.1825122
@craigbrownphd I'm thinking of signing up for this. I typically do a lot of coding questions (Copilot, which I pay for via GitHub), but I also do a lot of writing and idea/image generation.
How would you rank Gemini Advanced, GPT Plus, and Copilot Pro?
#AI #AGI #Dystopias #GodLikeAI #Film #SciFi #Movies #Cinema: "It is strange how these fears, once prominent, have faded into the annals of sci-fi history, but have seen a resurrection of sorts in the growing discussion of Artificial General Intelligence (AGI) and existential threats from AI. Sure, some fears of AGI and super-intelligent machines are unjustified, but it is interesting that some of these fears may very well have some early roots in these terrifying depictions of Godlike machines. I don’t think that we can learn a lot from these films, but at least they deserve to be revisited by those who are interested in popular culture depictions of AI. The ‘Godlike AI’ phase, though often overlooked, bears some intriguing reflections on humanity’s inherent fears, and hopes from its technological creations." https://www.technollama.co.uk/forgotten-dystopias-the-godlike-ai-that-time-forgot
As impressive as Musk's solitary Neuralink demo is, beneath the hype lies a misperception with a disturbing parallel to large language models. Both operate on statistical inference rather than scientific models of the brain, and both garner attention for cherry-picked successes 1/3
#SciFi #AI #Singularity #AGI: "The singularity concept postulates that AI will soon become superintelligent, far surpassing humans in capability and bringing the human-dominated era to a close. While the concept of a tech singularity sometimes inspires negativity and fear, Vinge remained optimistic about humanity's technological future, as Brin notes in his tribute: "Accused by some of a grievous sin—that of 'optimism'—Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: 'What if we succeed? Do you think that will be the end of it?'"
Vinge's concept heavily influenced futurist Ray Kurzweil, who has written about the singularity several times at length in books such as The Singularity Is Near in 2005. In a 2005 interview with the Center for Responsible Nanotechnology website, Kurzweil said, "Vernor Vinge has had some really key insights into the singularity very early on. There were others, such as John von Neumann, who talked about a singular event occurring, because he had the idea of technological acceleration and singularity half a century ago. But it was simply a casual comment, and Vinge worked out some of the key ideas."
Kurzweil's works, in turn, have been influential to employees of AI companies such as OpenAI, who are actively working to bring superintelligent AI into reality. There is currently a great deal of debate over whether the approach of scaling large language models with more compute will lead to superintelligence over time, but the sci-fi influence looms large over this generation's AI researchers." https://arstechnica.com/information-technology/2024/03/vernor-vinge-father-of-the-tech-singularity-has-died-at-age-79/
"If it does turn out to be anything like human understanding, it will probably not be based on LLMs.
After all, LLMs learn in the opposite direction from humans. LLMs start out learning language and attempt to abstract concepts. Human babies learn concepts first, and only later acquire the language to describe them." https://www.sciencenews.org/article/ai-large-language-model-understanding #AI #LLM #AGI
What I find interesting about the Sierra classic is that after a certain time you always have to eat and drink something in order to survive.
Do you know of another game with similar mechanics?
With all the recent breakthroughs in AI, it's clear AGI is right around the corner, just extrapolate the trend and imagine where we could be in a few years.
Much in the same way that interstellar travel was right around the corner in 1969 after the moon landings.
Woot, I extended 2 more job offers today, one junior and one senior. I suspect there are at least 3 more people in the pipeline I will be hiring. All from this one post.
That leaves at least 10 out of 15 positions open for anyone still looking for a job!
We also just donated another $5K to our open-source fund. This will go toward paying open-source contributors bounties for completing certain tasks on our projects. I'm very excited about that as well. We are even using the bounties in interviews for code projects, so in a sense we are paying people to take interviews with us even if they don't get the job. I am getting some interesting reactions to that one.
Does anyone get the idea that the main motivation for #AGI is a fantasy that conservative billionaires can free themselves of any need for pesky highly educated people?
They are quite set in the idea that higher education and a well-educated population are an existential threat. The current crop of investment is a cocktail of grift, FOMO, and delusional zeal that they are on the cusp of a perfect instrumentality (see Forbidden Planet) and luxury.
#Q* (pronounced #QStar) is a new #AI model being developed by #OpenAI, which is known for creating ChatGPT. Q* is designed to significantly improve AI reasoning and could potentially bring OpenAI closer to achieving artificial general intelligence (#AGI), a system that can apply human-like reasoning and problem-solving capabilities.
Q* has demonstrated the ability to outperform grade-school students in mathematical problems, suggesting that its reasoning
I think the most horrifying thing about the #AGI craze is the eagerness with which business owners race toward what they believe will bring about the end of human civilization.
They don't know that “#AI” isn't actually intelligent and can't actually replace people, but they believe it can. And they must know that replacing everyone with machines will result in basically everyone starving to death, and they WANT that. They're speedrunning it.