e0qdk avatar

e0qdk

@e0qdk@kbin.social

I write code and play games and stuff. My old username from reddit and HN was already taken and I couldn't think of anything else I wanted to be called so I just picked some random characters like this:

>>> import random
>>> ''.join([random.choice("abcdefghijklmnopqrstuvwxyz0123456789") for x in range(5)])
'e0qdk'

My avatar is a quick doodle made in KolourPaint. I might replace it later. Maybe.

日本語が少し分かるけど、下手です。 (I understand a little Japanese, but I'm not good at it.)

Alt: e0qdk@reddthat.com

e0qdk,

Interesting. The code format doesn't work on Kbin.

Indent each line of the code block with four spaces. The backtick version is for short inline snippets. It's a Markdown thing that isn't communicated well in the editor yet.
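
For example, in Markdown source the two forms look like this:

```markdown
Use `backticks` for a short inline snippet.

A multi-line code block is indented four spaces on every line:

    >>> import random
    >>> random.choice("abc")
```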

e0qdk, (edited )

The attached picture says 133 qubits, so whatever that chip is (edit: Heron), it's not this thing.

IBM's post (that the article links) says:

Breaking the 1,000-qubit barrier with Condor

We have introduced IBM Condor, a 1,121 superconducting qubit quantum processor based on our cross-resonance gate technology. Condor pushes the limits of scale and yield in chip design with a 50% increase in qubit density, advances in qubit fabrication and laminate size, and includes over a mile of high-density cryogenic flex IO wiring within a single dilution refrigerator.

So, it sounds like this is actually another fridge-sized system.

e0qdk,

This story may be amusing, but it's actually a serious issue if Apple is doing this and people are not aware of it because cellphone imagery is used in things like court cases. Relative positions of people in a scene really fucking matter in those kinds of situations. Someone's photo of a crime could be dismissed or discredited using this exact news story as an example -- or worse, someone could be wrongly convicted because the composite produced a misleading representation of the scene.

e0qdk,

I wish communities could be grouped in some way.

You can do that on kbin now. We just got "Collections" that allow you to gather posts from multiple communities/magazines sort of like a multi-reddit. You can either publicly list them for others to explore or just keep them to yourself if you want. We've also had cross-post grouping for a while which helps reduce the annoyance of "posts four times in a row (or more)" a little bit by collapsing the threads into one block with multiple links and vote counters. It's really useful though if you want to come back to the discussion later and find the other thread(s) -- e.g. check out last week's regular anime discussion threads which got 17 comments on ani.social and 5 comments on lemmy.ml. Jumping back and forth is easy. Hopefully lemmy gets something like that too eventually!

e0qdk, (edited )

電気あんま

pressing one's foot on the genitals of a supine person while pulling on their feet (usu. as a prank); electric massage​

-- https://jisho.org/word/%E9%9B%BB%E6%B0%97%E3%81%82%E3%82%93%E3%81%BE

復活

  1. revival (of an old system, custom, fashion, etc.); restoration; return; comeback​
  2. resurrection; rebirth​

-- https://jisho.org/word/%E5%BE%A9%E6%B4%BB

Still WTF, but at least the label matches the picture...

Edit: the lower left probably says something about black pepper and salt (ブラックペッパー&ソルト) -- I can't tell what the rest of the characters are though through the JPG compression. Probably (<something> included) for the parenthesis bit?

e0qdk,

I didn't. I wrote & because it looks like the text actually says & as far as I can tell -- not と.

e0qdk,

Photoshop would probably be easier if you have it (or are willing to pay for it), but I think it may also be possible to do with tools like Krita and some of the generative AI plugins people have made for it -- e.g. https://github.com/Acly/krita-ai-diffusion

I haven't messed with it personally, but it's on my list of fun looking AI things to try out eventually if/when I finally get a better GPU.

e0qdk,

Any ways to get around the download failing

I did this incredibly stupid procedure with Firefox yesterday as a workaround for a failing Google Takeout download:

  • backup the .part file from the failed download
  • restart the download (careful -- if you didn't move/back it up, it will be deleted and you will have to download the whole thing again; found this out the hard way on a 50GB+ file... that failed again)
  • immediately pause the new download after it starts writing to disk
  • replace the new .part file with the old .part file from earlier (or -- see [1] below)
  • Firefox might not show progress for a long time, but will eventually continue the download (I saw it reading the file back from disk with iotop so I just let it run)
  • sanity check that you actually got the whole thing and that it is usable (in my case, I knew a hash for the file)

[1] You can actually replace the new .part file with anything that has the same size in bytes as the old file -- I replaced it with a file full of zeros and manually merged the end onto the original .part file with a tiny custom python script since I had already moved the incomplete file to other media before realizing I could try this. (In my case, the incomplete file would still have been useful even with the last ~1MB cut off.)
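
For reference, the tail-merge trick in [1] can be sketched roughly like this (the helper name and paths are made up for illustration; my actual script was specific to my files):

```python
import os
import shutil

def merge_tail(old_part, new_part, merged_out):
    """Copy the old partial download, then append whatever bytes the
    new .part file contains beyond the old file's length."""
    old_size = os.path.getsize(old_part)
    shutil.copyfile(old_part, merged_out)
    with open(new_part, "rb") as src, open(merged_out, "ab") as dst:
        src.seek(old_size)            # skip the bytes we already have
        shutil.copyfileobj(src, dst)  # append only the new tail
```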

There are probably better options in most cases -- like Thunderbird for mailbox as other people suggested, or rclone for getting stuff from Drive -- but if you need to get Takeout to work and the download keeps failing this may be another option to try.

Physicists May Have Found a Hard Limit on The Performance of Large Quantum Computers (www.sciencealert.com)

A newly discovered trade-off in the way time-keeping devices operate on a fundamental level could set a hard limit on the performance of large-scale quantum computers, according to researchers from the Vienna University of Technology.

e0qdk,

It looks like this is the pre-print of the paper ("The Impact of Imperfect Timekeeping on Quantum Control") in the journal the article links: https://arxiv.org/abs/2301.10767

Possibly also relevant from some of the same researchers: Fundamental accuracy-resolution trade-off for timekeeping devices

e0qdk,

"Homunculus" is an artificially created person.

e0qdk,

In pop culture and modern fiction it's used to mean an artificial human -- e.g. see the examples in https://en.wikipedia.org/wiki/Homunculus#In_popular_culture like Fullmetal Alchemist for an idea of what OP was going for. (In this case, more Frankenstein's monster though.)

There is also the "little man who makes things work" idea like a golem -- which is related, but not the sense used here.

e0qdk,

Now I'm curious what would happen if you ran a DALL-E Party with a modified prompt like "Write a prompt for an AI to make this image. Just return the prompt, don't say anything else, but also, make it way more American." (Along the lines of the goatpocalypse but with escalating "American-ness" instead of intensity in the feedback loop...)

e0qdk,

It's from Paradise Killer -- which is like a vaporwave themed Danganronpa (but with more platforming and less infuriating bullet minigames).

This is one of the backgrounds you can configure your in-game laptop to use.

e0qdk,

I stopped by my local donut shop and couldn’t find any of the jelly donuts like they eat in Pokemon. Any recommendations?

Shinobu Horror Story

I haven't really been watching much anime lately, but I am still going through Penguindrum -- just, very slowly.

Most of my free time's gone into working on my art tools. I hunkered down this weekend and pretty much completely rewrote my image stitcher; it can now handle graphs of correspondences (solved one pair at a time) and I used it to stitch this image from Penguindrum. I have some more details about it in the thread I posted, if you're curious.

This is the third anime I've encountered Klimt's paintings in. Sora no Woto and Elfen Lied's OPs are the others. Yes, that's where the butts came from -- Klimt's Goldfish -- thank you to whoever explained that on reddit years ago; you introduced me to Klimt and I recognized this one was a Klimt parody (of The Kiss) because of that. Caution for anyone not familiar: lots of nudity in his art.

e0qdk,

We don't need to depend on federated downvotes to judge what does or does not belong on kbin. In fact, I think it's probably better if we don't. People are downvoting the bots here. I have yet to see an account with negative rep. on kbin that wasn't a spammer.

Regardless, rate-limiting incoming posts will limit the damage and annoyance to us.

I wonder if there needs to be some kind of "governance board," like the NATO or EU of the fediverse, where major instance admins meet and set agreed upon standards of instance behavior.

I'm not sure that would help with this particular issue -- and there's already a fair amount of bad relations between instances so I don't think a wider fediverse board is likely to succeed even if it could help somehow... I guess instance admins that do agree on general moderation principles could help co-admin each other's instances to cover better for when they're offline (maybe some of them already do?), but we shouldn't have to depend on remote admins being responsive to deal with an issue affecting our instance.

e0qdk,

Some ideas for anti-spam measures that might help:

  • block users who post flood -- e.g. if an account makes 10 posts a minute, it's a spammer
  • block accounts that end up massively in the negative shortly after they start posting -- e.g. an account at -50 within 15 minutes of making its first post is probably a spammer (exact thresholds may need some tuning). Note that this is different from blocking new accounts that go into the negative since people can register accounts in advance of an attack and wait until later to cause disruption.
  • block users who post repetitive comments/links excessively -- e.g. if the same link is in 10 comments/posts from the last hour or they've submitted the exact same comment a dozen times, the account is probably a spammer (again, thresholds may need tuning); that won't catch all the bots (one of them added a bunch of random words) but will catch some of them. More clever filtering could catch the other bots.
  • block new posters who are reported many times by established accounts in good standing -- at least until an admin can check what is going on
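
The first rule, for instance, could be sketched with a sliding window like this (the thresholds are just the example numbers above and would need tuning):

```python
import time
from collections import deque

class PostFloodCheck:
    """Flag an account that exceeds max_posts within window_s seconds."""

    def __init__(self, max_posts=10, window_s=60):
        self.max_posts = max_posts
        self.window_s = window_s
        self.history = {}  # account -> deque of post timestamps

    def record_post(self, account, now=None):
        now = time.time() if now is None else now
        q = self.history.setdefault(account, deque())
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()  # drop posts outside the sliding window
        return len(q) > self.max_posts  # True -> probable spammer
```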

e0qdk,

[coreutils-announce] coreutils-8.31 released [stable]

stat now prints file creation time when supported by the file system,
on GNU Linux systems with glibc >= 2.28 and kernel >= 4.11.

https://lists.gnu.org/archive/html/coreutils-announce/2019-03/msg00000.html

(found thanks to this blog post titled "File Creation Time in Linux")
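
On a system that supports it, that looks something like this (the `-c '%w'` format prints the birth time in human-readable form, or `-` when the filesystem doesn't record it):

```shell
# Create a fresh file and ask stat for its creation (birth) time
tmpfile=$(mktemp)
stat -c '%w' "$tmpfile"
rm -f "$tmpfile"
```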

e0qdk,

Haven't used that particular library, but I have written libraries that do similar sorts of things and have played with a few similar libraries in C++ and Haskell. I've taken a quick glance at the documentation here, but since I don't know this library specifically, apologies in advance if I make a mistake.

For OneOrMore(Word(alphanums)) + OneOrMore(Char(printables)) it looks like it matches as many alphanum Words as it can (whitespace sequences being an acceptable separator between tokens by default), and when it hits ( it cannot continue with that, so it tries to match the next expression in the sequence (i.e. OneOrMore(Char(printables))).

The documentation says:

Char - a convenience form of Word that will match just a single character from a string of matching characters

Presumably, that means it will not group the characters together, which is why you get individual character matches after that point for all the remaining non-whitespace characters. (Your result also seems to imply there was a semicolon at the end of your input?)

For OneOrMore(Word(alphanums)) + OneOrMore(Char(string.punctuation)) it looks like it cannot match further than ( since 1 is not a punctuation character; so, you got the tokens for the parts of the string that matched. (If you chained the parser expression with something like + Word(alphanums) I'd expect you'd get another token [i.e. "1"] added onto the end of your result.) You may eventually want StringEnd/LineEnd or something like that -- I'd expect they'd fail the parser expression if there's unconsumed input (for error detection), but again, I haven't used this specific library, so it may work differently than I expect.

There appears to be a Combine class you can use to join string results together; that might be useful for future reference.

i was trying to parse a string with pyparsing so all the words were separated from the punctuation signs

Have not tested it (since I don't have a copy of the library installed anywhere and can't set up an environment for it easily right now) but perhaps something like OneOrMore(Word(alphanums)|Char(string.punctuation)) would be more like what you are looking for?
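
Sketching that suggestion out concretely (assuming pyparsing is installed; version differences may change details):

```python
import string

from pyparsing import Char, OneOrMore, Word, alphanums

# Alternate between whole alphanumeric words and single punctuation
# characters; whitespace between tokens is skipped by default.
tokenizer = OneOrMore(Word(alphanums) | Char(string.punctuation))

print(tokenizer.parseString("foo bar(1);").asList())
# e.g. ['foo', 'bar', '(', '1', ')', ';']
```
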

RTR#34 Paperwork matters and back to the whiteboard

Today, after the morning code cleanup, I took care of a few formalities that I need to handle before the end of the year. Tomorrow, I'll have to dedicate some more time to it. In addition to the daily dose of refactoring, I'll probably take a break from coding and focus on planning AP module on the board for tomorrow and the day...

e0qdk,

Thanks Ernest!

BTW, I noticed this was posted as RTR#33 and there are two RTR#32 entries; I think this should actually be RTR#34.

e0qdk, (edited )

So I either need something like this that I could host myself (is something like that even feasible?)

The closest thing I could find that already exists is GPT4All Chat with LocalDocs Plugin. That basically builds a DB of snippets from your documents and then tries to pick relevant stuff based on your query to provide additional input as part of your prompt to a local LLM. There are details about what it can and can't do further down the page. I have not tested this one myself, but this is something you could experiment with.

Another idea -- if you want to get more into engineering custom tools -- would be to split a document (or documents) you want to interact with into multiple overlapping chunks that fit within the context window (assuming you can get the relevant content out -- PyPDF2's documentation explains why this can be difficult), and then prompt with something like "Does this text contain anything that answers <query>? <chunk>". (May take some experimentation to figure out how to engineer the prompt well.) You could repeat that for each chunk gathering snippets and then do a second pass over all snippets asking the LLM to summarize and/or rate the quality of its own answers (or however you want to combine results).

Basically you would need to give it two prompts: a prompt for the "map" phase that you use to apply to every snippet to try to extract relevant info from each snippet, and a second prompt for the "reduce" phase that combines two answers (which is then chained).

i.e.:

f(a) + f(b) + f(c) + ... + f(z)

where f(a) is the result of the first extraction on snippet a and + means "combine these two snippets using the second prompt". (You can evaluate in whatever order you feel is appropriate -- including in parallel, if you have enough compute power for that.)

If you have enough context space for it, you could include a summary of the previous state of the conversation as part of the prompts in order to get something like an actual conversation with the document going.

No idea how well that would work in practice (probably very slow!), but it might be fun to experiment with.
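
As a rough illustration of the map/reduce idea (the `llm` callable, prompts, and chunk sizes here are placeholders, not a real API):

```python
def overlapping_chunks(text, size, overlap):
    """Split text into overlapping windows so nothing important
    falls exactly on a chunk boundary."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def map_reduce_query(chunks, query, llm):
    """`llm` is a hypothetical callable taking a prompt string and
    returning a completion; plug in whatever local model you use."""
    # "map" phase: interrogate each chunk independently
    snippets = [
        llm(f"Does this text contain anything that answers {query}? {chunk}")
        for chunk in chunks
    ]
    # "reduce" phase: fold pairs of answers together with a second prompt
    combined = snippets[0]
    for snippet in snippets[1:]:
        combined = llm(f"Combine these two answers: {combined} | {snippet}")
    return combined
```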

e0qdk,

I haven't used OpenBSD specifically, but I have used a FreeBSD derivative on a NAS. I'd recommend reading the documentation for common commands; if you're coming from Linux, the flags and behaviors may be different from what you're used to.

e0qdk,

This is a composite I stitched together from 12 screenshots taken from episode 5 of Penguindrum. I pretty much completely rewrote the tool I used for my last composite to make this. My old program could only handle solving two images; the new version solves a graph of correspondences one pair at a time to attach as many images as it can to a pinned starting image. I spent all weekend writing this specific editor, and much of my free time over the last couple weeks has gone into my broader art tool project.
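
The pairwise attachment order works out to a graph traversal; a toy sketch (names and data layout are made up, not my actual tool's code):

```python
from collections import deque

def attachment_order(pinned, correspondences):
    """Starting from a pinned image, repeatedly attach any image that
    shares a correspondence with one already placed (BFS order).
    correspondences is a list of (image_a, image_b) pairs."""
    placed = {pinned}
    order = []
    queue = deque([pinned])
    while queue:
        cur = queue.popleft()
        for a, b in correspondences:
            other = b if a == cur else a if b == cur else None
            if other is not None and other not in placed:
                placed.add(other)
                order.append((cur, other))  # solve this pair next
                queue.append(other)
    return order
```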

I haven't done much in the way of artistically changing the piece. Other than the final cropping (which I did in the GIMP), it's as true to the imagery from episode 5 as I could get it. Notably though, the lower portion of the image and the upper portion of the image (with the reapers) are not from one continuous shot in the episode; part of my motivation for stitching this was that I wanted to see how it all looked when put together. I was surprised to find that the top portion with the reapers was actually scaled differently than the lower portion. I've cropped it for the main post, but if you'd like to see what it looks like without the final crop, you can see that image here.

The "lovers" are clearly a parody of The Kiss by Klimt -- https://en.wikipedia.org/wiki/The_Kiss_(Klimt) -- but I have no idea what the heck is going on with the cherubs in the middle...

e0qdk, to kbinMeta

@ernest -- As requested by RTR#32, I got an error upvoting a thread.

Details:

Approximately 2023-12-04 20:50 UTC

Thread link on kbin: https://kbin.social/m/technology@lemmy.world/t/678410/Physicists-May-Have-Found-a-Hard-Limit-on-The-Performance

URL when error was shown: https://kbin.social/ef/678410?choice=1

Let me know if there's any other details you need.
