Comments


CubitOom, to 196 in Trumpet rule

I just did a search, but Kagi brought me to this.

earthtouchnews.com/…/mouth-breathing-dolphin-make…

CubitOom, to news in Dow hits 40,000 for the first time as bull market accelerates

I don’t understand why there is a bull market.

Wouldn’t the latest CPI report mean that the Fed is less likely to lower interest rates, which in turn would mean the high-APY cash accounts are going to stay in effect for longer? Meaning a 5% APY on liquid cash, without risk.
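For a rough sense of what that 5% figure means in practice, here’s a minimal back-of-the-envelope sketch (the starting balance is hypothetical, and APY is treated as the effective annual rate, so compounding is already baked in):

```python
# Hypothetical example: cash parked at a 5% APY.
# APY is the effective annual rate, so compounding is already accounted for.
balance = 10_000.00   # assumed starting balance in USD
apy = 0.05            # assumed 5% annual percentage yield

for year in range(1, 4):
    balance *= 1 + apy
    print(f"Year {year}: ${balance:,.2f}")

# Year 1: $10,500.00
# Year 2: $11,025.00
# Year 3: $11,576.25
```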

The only reason I can think of is that Boomers are trying to maximize their retirement funds and not reading anything, not even headlines.

But this wouldn’t take into account the large banks and firms that are really leading the bull run.

Is this really just because of the idea that there is a potential for “AI” to increase productivity?

None of it makes sense to me, but I’m not an economist.

CubitOom, to 196 in Trumpet rule

Trumpeters often don’t use their nose to play.

CubitOom, to funny in Tuba

I want that drip tho

CubitOom, to tenforward in Gurney, his pug defiant!

Gurney, his mullet sweet!

CubitOom, to canada in Got to do something over the winter

I just used Kagi to search for the conversion, and thought the long decimal was funny.

But now that I think of it, does Canada make its own 4 L jugs so they can be accurately advertised, or do they just use the US 1 gal jugs and call it 4 L out of convenience, but then write in fine print on the bottom that it’s actually 3.79 L?

Unless that is actually a 4 L jug of vodka, couldn’t someone sue for misrepresenting the amount of product being sold?

Someone’s volume here is probably not precise. And I’m going to guess it’s the one claiming to be a larger volume with an additional manufacturing cost.

CubitOom, to canada in Got to do something over the winter

No…that would be insane. 1 US gallon is only 3.78541178 liters.
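For anyone checking the arithmetic, here’s a quick hypothetical sketch using the 3.785411784 L definition of the US gallon (the 4 L label is just the one from the jug in the thread):

```python
# Rough check: how far does a US 1-gallon jug fall short of a "4 L" label?
LITERS_PER_US_GALLON = 3.785411784  # definition of the US gallon

advertised_liters = 4.0
actual_liters = 1 * LITERS_PER_US_GALLON

shortfall = advertised_liters - actual_liters
print(f"Actual volume: {actual_liters:.3f} L")
print(f"Shortfall:     {shortfall:.3f} L ({shortfall / advertised_liters:.1%} of the label)")

# Actual volume: 3.785 L
# Shortfall:     0.215 L (5.4% of the label)
```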

CubitOom, to android in PSA: Nova Launcher is owned by an analytics company

This seems awesome, testing it out.

CubitOom, to canada in Got to do something over the winter

Just wanted to clarify something here about Canadians… So you put your milk in bags but your vodka goes into milk jugs?

CubitOom, to europe in European Police Chiefs call for industry and governments to take action against end-to-end encryption roll-out

I guess that means that those same police chiefs don’t use any end-to-end encryption whatsoever.

CubitOom, to technology in MKBHD - Do Bad Reviews Kill Companies?

In today’s market, the perception or even the profitability of a product means nothing. All that actually matters is growth.

For a publicly traded company, or even one that just uses venture capital to start up, the product isn’t the thing that they might sell to consumers; it’s their brand. This is what gives them more capital to continue running the company and, ultimately, to profit.

This means that a company no longer needs to make good products, they don’t need to keep customers happy, they don’t even need to be profitable. All they need is to show growth opportunities to potential investors.

CubitOom, to futurology in Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance.

Maybe it’s this arbitrary word, “hallucination”, which was recently borrowed from the human experience to explain why something that is normally factual, like a computer, is not computing facts.

But if one were to think about it, what is the difference between a series of non-factual hallucinations in a model and a person’s individual experience of the world?

  • If two people eat the same food item, they might taste different things.
  • They might have different definitions of the same word.
  • They might remember an object being a different color than someone’s recording could prove. There is a reason why eyewitness testimony is considered unreliable in a court of law.

Before, we called these bugs or even issues. But now that it’s inside a black box of sorts, whose decision-making process we can’t alter as directly as before, there is suddenly this more human-sounding name.

To clarify, when an LLM gets a fact wrong because it has limited context or because its foundational model is flawed, is that the same result as the experience someone has after consuming psychedelic mushrooms? No, I wouldn’t say so. Nor is it the same when a team of scientists tries to make a model actively hallucinate so they can find new chemical compounds.

Defining words can sometimes be very tricky, especially when they apply to multiple areas of study. The more you drill into a definition, the more it becomes a metaphysical debate. But it is important to have these discussions, because even the definition of something like AGI keeps changing. In fact, it only exists because the goalposts for AI have moved so much. What will stop a company that is trying to attract investors from just slapping an AGI label on its next release? And how will we differentiate what the spirit of the word is trying to convey from the sales pitch?

CubitOom, to futurology in Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance.

Sure, there is intentional creative thought. But there are also unintentional creative thoughts: moments of clarity, eureka moments, and strokes of inspiration. How do we differentiate these?

If we were to say that it’s because our subconscious is intentionally promoting these thoughts, then we would need a method to test that, because otherwise the difference is moot.

Similar to how one might define the “I” in AGI, it’s hard to form a consensus on general and often vague definitions like these.

CubitOom, to lemmyshitpost in Total ecstasy

Not even a banana?

CubitOom, to futurology in Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance.

I wonder where the line is drawn between an emergent behavior and a hallucination.

If someone expects factual information and gets a hallucination, they will think the LLM is dumb or not helpful.

But if someone is encouraging hallucinations and wants fiction, they might think it’s an emergent behavior.

In humans, what is the difference between an original thought, and a hallucination?
