jonny,
@jonny@neuromatch.social avatar

Scientists will be like "results should be replicable!" but then do all their experiments with a random walk of homebrew code that runs on four computers networked with a nest of BNC cables, each with a different version of MATLAB, and after every experiment the data is saved by walking a flash drive around to each of them since they can't be connected to the internet because one of them still runs Windows XP, and if the rest so much as heard of a software update, the work of 5 grad students whose whole PhDs were spent setting up this monstrosity would be ruined forever.

jonny,
@jonny@neuromatch.social avatar

"To start the experiment, put the mouse in the box, and then run over to press the space bar on the behavior computer before they can touch the lever or else the uninitialized NI driver will crash and you'll need to restart everything.

Then go over to the LabVIEW computer, wait until you can hear the near-ultrasonic sound of the stepper motors starting up and then press the start button. If you do it before then, the phases of the experiment will be misaligned and you won't know until the experiment is over.

Then start the ScanImage computer, minimize the one hundred windows that pop up, and resize the preview window to the single height and width that avoids a moiré pattern that will ruin the data, since for some reason whatever is displayed there is what gets saved to disk.

The last computer is just for receiving TTL pulses to align the data afterwards, since none of the other components can talk to each other, and before you start you should set the system time to 1980-01-01T00:00 because it runs on a version of Linux intended to be embedded in mall kiosks that uses an unspeakably rare 24-bit time, and all the lab's analysis code has been built around that."
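(And in case anyone is actually stuck doing the TTL-alignment dance: the post-hoc part usually boils down to fitting a line through the event times each machine recorded for the same pulses. A minimal sketch, with made-up numbers and names:)

```python
import numpy as np

def fit_clock_mapping(ttl_times_a, ttl_times_b):
    """Fit a linear map from machine A's clock to machine B's clock using
    the timestamps both machines recorded for the same TTL pulses.
    Returns (slope, offset) such that t_b ~= slope * t_a + offset."""
    slope, offset = np.polyfit(ttl_times_a, ttl_times_b, 1)
    return slope, offset

# Hypothetical: the same five pulses as logged by two machines whose
# clocks disagree in both offset and drift.
ttl_a = np.array([10.000, 20.000, 30.000, 40.000, 50.000])       # s, machine A
ttl_b = np.array([112.001, 122.003, 132.004, 142.006, 152.008])  # s, machine B

slope, offset = fit_clock_mapping(ttl_a, ttl_b)
event_a = 25.0                      # an event timestamped by machine A...
event_b = slope * event_a + offset  # ...expressed on machine B's clock
print(f"drift {slope:.6f}, offset {offset:.3f} s -> event at {event_b:.3f} s on B")
```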

jonny,
@jonny@neuromatch.social avatar

I'm only slightly exaggerating. Most of these I have actually seen in practice, with some names changed.

And as usual the caveat: it's like this not bc ppl are stupid or lazy, but bc our busted-ass systems don't reward infrastructure work or provide proper training, and this was the best the poor biology grad student could do given their PI breathing down their neck for data and the already fucked-up system they inherited and had to build off of.

jonny,
@jonny@neuromatch.social avatar

I'm like dog, what's the point of open data if it doesn't include the 100 secret hardcoded variables that control absolutely crucial details like reward size or light cycle or display LUT etc. that completely define the outcome of the experiment, even assuming the rest of the code and hardware work 100% as intended.

It took years of design to make a system that allows you to fiddle with all the values and capture all the phases of an experiment, so you could find the sweet spot that works while preserving provenance. Most of the time people just... don't worry about that.
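(To make "preserving provenance" concrete, the minimum viable version is just: declare every experiment-defining value in one place and write it down next to the data. A sketch, all names and values hypothetical:)

```python
import hashlib
import json
import time

# The anti-pattern: crucial values buried as magic numbers deep in the code.
#   reward_ul = 4.2   # why 4.2? who changed it? when?

# A minimal alternative: one declared parameter set, saved with every session.
params = {
    "reward_ul": 4.2,           # water reward size (microliters)
    "light_cycle": "12:12",     # light:dark hours
    "display_lut": "gamma2.2",  # monitor lookup table in use
}

def save_with_provenance(session_path, params):
    """Write the parameters next to the data, timestamped and hashed so any
    later change to them is at least detectable."""
    record = {
        "params": params,
        "saved_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params_hash": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(session_path + ".params.json", "w") as f:
        json.dump(record, f, indent=2)

save_with_provenance("session_001", params)
```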

moritz_negwer,
@moritz_negwer@mstdn.science avatar

@jonny This is too true. I have vivid memories of finding that one particular lab hardware PC (running WinXP and an ancient version of the proprietary controller software) had been in use for at least 12 years. Several PhDs' worth of data was on its lone HDD, without consistent backup. Everyone with decision power was surprised to learn about the bathtub curve of hardware failure.

It was - eventually, and after much pleading - replaced, at least.

moritz_negwer,
@moritz_negwer@mstdn.science avatar

@jonny Your point about infrastructure work not being rewarded is spot-on, though. Lots of research involves finding strange and esoteric measurements without any architectural planning at all. Cobbling together a Rube Goldberg machine on the spot is probably the quickest way to get there.

As a positive example of good documentation, I would like to offer https://livemousetracker.org from Thomas Bourgeron's lab, a DIY open-hardware behaviour platform that comes with Ikea-style build instructions.

jonny,
@jonny@neuromatch.social avatar

@moritz_negwer
OMG I LOVE the illustration in the instructions

jonny,
@jonny@neuromatch.social avatar

@moritz_negwer
Here's mine
Guides:
https://wiki.auto-pi-lot.com/index.php/Autopilot_Behavior_Box
https://wiki.auto-pi-lot.com/index.php/Autopilot_Tripoke

3D Printed parts:
https://wiki.auto-pi-lot.com/index.php/Schematics

Example of how code sharing and methods sections could work:
https://wiki.auto-pi-lot.com/index.php/Plugin:Autopilot_Paper
Idk semantic wikis are pretty cool.

That's all hybrid natural language and structured data, so like all that metadata is also an RDF graph:
https://wiki.auto-pi-lot.com/index.php/Special:ExportRDF
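(A sketch of what you can do with that, assuming rdflib and without getting into the wiki's exact vocabulary — Semantic MediaWiki serves RDF/XML per page from Special:ExportRDF, so you can pull a page's metadata into Python and treat it like any other graph:)

```python
from rdflib import Graph

# Hypothetical: a saved RDF/XML export of one wiki page.
g = Graph()
g.parse("autopilot_behavior_box.rdf", format="xml")

# Every statement in the page's metadata is a (subject, predicate, object) triple.
for subj, pred, obj in g:
    print(subj, pred, obj)

print(f"{len(g)} triples")
```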

And it integrates with the software to power plugins and, yno, lets you take the values from a datasheet and use them to calibrate hardware or compute compatibility between hardware configs:
https://docs.auto-pi-lot.com/en/latest/guide/plugins.html

And the software isn't so bad:
https://docs.auto-pi-lot.com/en/latest/index.html
https://github.com/auto-pi-lot/autopilot

And there are some ok illustrations in the paper I think
https://doi.org/10.1101/807693

That project sort of developed its own design style that was very flat and, idk, sort of a "playful schematic," but I am not kidding when I say that after seeing those idea drawings I want to do that for this current project.

These I think are some of my most IKEA-like things:

Autopilot behavior box panels cut sheet: a diagram showing how to cut a large 72" x 48" sheet of acrylic into smaller subpanels for laser cutting.

jonny,
@jonny@neuromatch.social avatar

@moritz_negwer
I designed that software so you could cobble together Rube Goldberg machines responsibly and with support, and so your cobbling could be shared in such a way that it helped others with their cobbling with as low of a threshold as possible while independently, visibly crediting you.

I am a friend of the cobbler and the hacker, my ideal world is definitely not machinelike sameness

jonny,
@jonny@neuromatch.social avatar

@moritz_negwer
Big standardized observatories are cool and good for some things. Wild-ass free-ranging collaboration based on surfacing and aligning invisible labor is another way. Two of many strategies for reliable science ❤️, but I haven't had much luck yet.

moritz_negwer,
@moritz_negwer@mstdn.science avatar

@jonny Wow, this looks really nice! Will recommend this to my behaviour friends :)

I'm all for wild-ass free-ranging coalitions of tinkerers, surfacing and documenting all the hidden variables that influence experiments.

You could frame this as a publication-medium constraint as well: an evolving wiki is probably a better format for documenting all the tacit knowledge than a paragraph in a methods section.

jonny,
@jonny@neuromatch.social avatar

@moritz_negwer
Yes i suppose one could frame it that way :)
https://jon-e.net/infrastructure/#experimental-frameworks
https://jon-e.net/infrastructure/#applications

Applications: Continuing the example of the Autopilot wiki, we could make an array of technical knowledge wikis. Wikis organized around individual projects could federate together to share information, and broader wikis could organize the state of our art which currently exists hollowed out in supplemental methods sections. The endless stream of posts asking around for whoever knows how to do some technique that should be basic knowledge for a given discipline illustrates the need. Across disciplines, we are drenched in widely-used instrumentation and techniques without coherent means of discussing how we use them. Organizing the technical knowledge that is mostly hard-won by early career researchers without robust training mechanisms would dramatically change their experience in science, whittling away at inequities in access to expertise. Their use only multiplies with tools that are capable of using the semantically organized information to design interfaces or simplify their operation, as described in experimental frameworks.
Technical wikis could change the character of technical work. By giving a venue for technical workers to describe their work, they would be welcomed into and broaden the base of credit currently reserved only for paper authors. Even without active contribution, they would be a way of describing the unseen iceberg of labor that science rests on. Institutional affiliations are currently just badges of prestige, but they could also represent the dependence of scientific output on the workers of that institution. If I do animal research at a university, and someone has linked to the people responsible for maintaining the animal facility, then they should be linked to all of my work. Making technical knowledge broadly available might also be a means of inverting the patronizing approach to "crowdsourcing" and "citizen science" by putting it directly in the hands of nonscientists, rather than at the whim of some gamified platform (see [313]).
These technologies point to a few overlapping and not altogether binary axes of communication systems.

Durable vs. Ephemeral - journals seek to represent information as permanent, archival-grade material, but scientific communication also necessarily exists as contextual, temporally specific snapshots.

Structured vs. Chronological - scientific communication needs to present itself as a structured basis of information with formal semantic linking, but also needs the chronological structure that ties ideas to their context. This axis is a gradient from formally structured references, through intermediate systems like forums with hierarchical topic structure that embeds a feed, to the purely chronological feed-based social media systems.

Messaging vs. Publishing - communication can be person-to-person or person-to-group with defined senders and recipients, or a person-to-all statement to an undefined public. This ranges from private DMs through domain-specific tool indexes like OpenBehavior to the uniform indexing of Wikipedia.

Public vs. Private - who gets to read, who gets to contribute? Communication ranges from entirely private notes to self, through communication within a lab, collaboration group, or discipline, to the entirely public realm of global communication.

moritz_negwer,
@moritz_negwer@mstdn.science avatar

@jonny

To paraphrase your labor argument, the current system incentivizes a privatization of the gains (publications = reputation economy credits) and a socialization of the losses (undocumentable environmental factors and proprietary hacks = documentation debt). Let this rot for too long, and you get a sub-prime evidence (=reproducibility) crisis.

In that sense, kudos to you for paying down some of the field's debt!

jonny,
@jonny@neuromatch.social avatar

True story: when I first started grad school, water rewards were delivered by gravity-fed tubes with a 100cc syringe body taped to the wall as the reservoir. None of the mice were learning the task and all had response biases. Then one day the tape lost adhesion and the syringe fell down. We taped it back up, and then all the mice in that box started learning. It turns out we had taped it about 8 inches lower than it was before, and the tiny reduction in reward delivered from the lowered gravity feed was enough to motivate the mice to pay attention.

Has anyone ever read anything like that in a methods section?
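(For a sense of scale: with a gravity feed, the pressure at the valve is just rho * g * h, so 8 inches is a real difference. Numbers below are hypothetical, I don't remember the original height:)

```python
# Back-of-the-envelope for the falling syringe (all numbers hypothetical).
RHO = 1000.0   # kg/m^3, water
G = 9.81       # m/s^2
INCH = 0.0254  # m

drop = 8 * INCH              # how much lower the syringe got re-taped
dp = RHO * G * drop          # change in pressure head at the valve
print(f"pressure change: {dp:.0f} Pa (~{dp / 1000:.1f} kPa)")

# For a valve opened for a fixed time, outflow speed scales like sqrt(head)
# (Torricelli), so assuming an original head of 1.0 m:
head_before = 1.0            # m, assumed
head_after = head_before - drop
ratio = (head_after / head_before) ** 0.5
print(f"reward scales by ~{ratio:.2f}x (~{(1 - ratio) * 100:.0f}% smaller)")
```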

jonny,
@jonny@neuromatch.social avatar

I worked on behavior tooling both bc it is really frustrating to see everyone struggling all the time, and bc it's a freaking labor and equity issue. What I'm talking about here are our working conditions. And very few labs can afford the fleet of postdocs to keep the ship afloat, or work down the hall from the guy who invented silicon or whatever who can show you the True Secrets.

jonny,
@jonny@neuromatch.social avatar

Also, how dope would it be if we could just do stuff and it works, and then you can share what you did with other people and they can do it too even if they have different stuff than you, and then they change it and share it again...

RuStelz,

@jonny Do Androids Dream of Electric Sheep? crossed with Mona Lisa Overdrive, a bit of Stanislaw Lem, but most of all Cordwainer Smith: this is what your prose reminds me of.

https://en.wikipedia.org/wiki/Cordwainer_Smith#:~:text=(%22Cordwainer%22%20is%20an%20archaic,skilled%20workers%20with%20traditional%20materials.)

jonny,
@jonny@neuromatch.social avatar

@RuStelz
I read no fiction but PKD for two years because I couldn't get enough, and washed that down with a Lem binge. Now I have two more people to read. ❤️ Thank you for this kind compliment.

DamonHD,
@DamonHD@mastodon.social avatar

@jonny Ha: my current work is a few hundred lines of very simple Java, already posted in a GitHub repo mirrored to Zenodo. Death through underwhelm is a bigger risk than unreproducibility! B^>

jonny,
@jonny@neuromatch.social avatar

@DamonHD
Hell ya! If ya got your tests and your build environment and your docs, then that's one part of the software squared away, which is saying a lot, and I mean that as a genuine compliment.

hrefna, (edited )
@hrefna@hachyderm.io avatar

@jonny My father's lab had a way of purifying a toxin that they had done for maybe 20 years.

Then one day it stopped working and no one could figure out why. None of the grad students or techs could do it any more.

It turned out there's a step where they would put it in a flask and then cover that flask with foil for the next step. They had been crimping the foil down to keep it in place.

It turns out the foil needs to be loose to allow oxygen mixing, something no one had known previously.

jonny,
@jonny@neuromatch.social avatar

@hrefna
Stuff like this makes me want to have CCTV recording 24/7 in lab spaces, where the only way you could access it would be behind a plate of glass labeled "break in case of sudden core lab protocol failure", to figure out which of 500 unknown variables changed.

lampsofgold,
@lampsofgold@veoh.social avatar

@jonny this is so real and it sucks because no one wants to invest in untangling the rat’s nest, so even though I’d do it for my whole career if I could, no one will pay for it

jonny,
@jonny@neuromatch.social avatar

@lampsofgold i'm basically dancing this dance until the music stops, biting the hand that feeds all the while

albertcardona,
@albertcardona@mathstodon.xyz avatar

@jonny

We once referred to this as:

"A problem often related as 'the computer science PhD student moved on, and we do not know what parameters were used, neither what the magic numbers mean'."

https://www.nature.com/articles/nmeth.2082

The project aimed at addressing these issues for bioimage informatics, and has largely succeeded.

jonny,
@jonny@neuromatch.social avatar

@albertcardona wow, same as it ever was

albertcardona,
@albertcardona@mathstodon.xyz avatar

@jonny

Very much so. One has to plan from the beginning. At least nowadays version control is, I hope, the norm, unlike 10 years ago, which means there's a higher chance of code and documentation existing beyond the very short academic contracts and cycles.

SnoopJ,
@SnoopJ@hachyderm.io avatar

@jonny @meejah the microscopy lab I got to use as an undergrad had data interchange that was canonically shuttling data between two systems using a Zip disk, because the systems couldn't be networked for Reasons and the USB 1.0 port on the source machine was a false flag: no pendrive a human being would bring into that lab would work there.

Of course, that was the high-tech room. The lower-tech room with a PC used for running detector counts had slightly older machines and practices…

shieldsy05,

@SnoopJ @jonny @meejah a diagnostic lab I worked in for research stuff used a gamma counter to determine results of a particular iodisation assay.

The connected computer still ran an MS-DOS style operating system. The connected parallel port printer had broken, and they had no other way to get the results from the machine.

I of course tried to solve the problem: ended up copying the entire hard drive to a floppy disk, loading the relevant program onto a computer running XP, and figured out how to trick the program into sending the ‘print’ data to a USB printer.

Then, of course, they never used it again 🤷‍♂️

jonny,
@jonny@neuromatch.social avatar

@shieldsy05 @SnoopJ @meejah omg stop the 'they never used it again' part is too much for me

jonny,
@jonny@neuromatch.social avatar

@shieldsy05 @SnoopJ @meejah (but actually don't stop if you have more good stories, i love hearing them, because somehow scientists manage to work themselves into some purely slapstick technological predicaments)

shieldsy05,

@jonny @SnoopJ @meejah haha oh god and diagnostics is the worst - no one ever wants change because they have to re-validate.

Okay, story 2: so there’s this assay called a CH50. Basically they dilute a patient’s serum to a bunch of different concentrations, and see how many sheep’s red blood cells the serum can burst at each concentration, by measuring the amount of free haemoglobin that’s left after removing any intact red blood cells. They do this by measuring the colour of the leftover fluid to see how red it is.

The instrument that measured the colour produced data in csv/excel format, which would then get transcribed manually into another spreadsheet to do some averages across duplicate or triplicate data.

Then they would plot the data on a graph.

Now, this type of data should usually fit what we now call an s-shaped/sigmoid curve, right?

Except instead of doing that in the Excel spreadsheet they already had, they manually transcribed the data AGAIN into an old DOS program called CURFIT. Which, as the name suggests, fits a curve…

Except back in the 80s, when this program was written, sigmoid curves didn't exist! (Or weren't available in this particular program, and that wasn't considered a problem.)

Instead, CURFIT would try to fit a cubic polynomial. Which, look, must have worked for them for a good while.

The goal was to interpolate from the curve, how much serum it would take to lyse 50% of the red blood cells (or something like that. It’s been a while 😅)

But using a cubic polynomial had this quirk, that if the data at the ends was the slightest bit too far out, the curve wouldn’t fit properly, and made the whole day’s data practically useless.

Even when it did work, it would display the data on-screen, which would get transcribed by hand onto paper, and then get manually typed into that excel spreadsheet again!!
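(For what it's worth, the modern version of CURFIT's job is a few lines: fit a four-parameter logistic and read the midpoint off the fitted parameters. A sketch with made-up numbers, assuming scipy:)

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic; ec50 is the midpoint between bottom and top."""
    return bottom + (top - bottom) / (1 + (ec50 / x) ** hill)

# Hypothetical dilution series: serum concentration vs. % lysis.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
lysis = np.array([4.0, 9.0, 22.0, 48.0, 76.0, 91.0, 97.0])

popt, _ = curve_fit(logistic4, conc, lysis, p0=[0.0, 100.0, 4.0, 1.0])
bottom, top, ec50, hill = popt
# With bottom near 0 and top near 100, ec50 is the ~50% lysis point.
print(f"50% lysis at ~{ec50:.2f} units of serum")
```

Unlike a cubic, the sigmoid can't blow up at the ends of the dilution series, so one slightly-off endpoint doesn't wreck the whole day's data.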

shieldsy05,

@jonny @SnoopJ @meejah

Story 3 (featuring the CH50 again!):

2-3 years before I started at the lab, they bought this automatic pipetting system. Would have cost them a lot.

Sat on the bench for years because neither the lab staff, nor the engineers from the manufacturer could get it to do what we wanted.

So I took a crack at it! It turns out the engineers were trying to precisely replicate the experiment (the way the humans did it), not understanding which components of the test could be adjusted to make it easier to translate to the machine - like changing the arrangement of the tubes, or putting the samples in a 96-well plate.

I got it working, it could run most of the experiment, with a little human intervention (there was a water-bath step that was a bit tricky to incorporate).

Except because it had sat for so long without use, one of the four fluidics syringes needed replacing, and the lab didn’t want to pay for that.

Getting this working would have saved them 4-8 hours per week at the very least (combining that with improving the data workflow from story 2 would have saved them even more!).

shieldsy05,

@jonny @SnoopJ @meejah Story 4 (a short one):

There’s this assay called lymphoproliferation — basically seeing how much white blood cells replicate over time, by giving them a radioactive version of a DNA building block. They’d incorporate more each time the cells divided.

Used a beta counter to detect the radiation, again on a DOS program.

They bought a high-throughput setup, and because they didn't understand it, they couldn't get it set up to do what they wanted, so again, they gave up and never touched it again.

jonny,
@jonny@neuromatch.social avatar

@shieldsy05
I am just imagining an angry, pouty dad figure trying to figure out how to use a smartphone for the first time. "Where are the buttons? This thing is worthless!"

jonny,
@jonny@neuromatch.social avatar

@shieldsy05
Ok this one makes me curious - why were they trying to make the machine mimic the human version? Idk how to ask it exactly, but what were the human practices that were hard for the machine?

And the complete inattention to what is truly costly rings clear as a bell. I have been trying to show my labmates how to write tests (~hours) so they don't have to constantly be debugging the same problems (~weeks), but no luck yet.
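(The kind of test I mean really is tiny - pin down behavior you already debugged once so it can never silently regress. A hypothetical example, function and values made up:)

```python
# test_calibration.py - run with `pytest`. Hypothetical function and values.
import pytest

def reward_duration_ms(volume_ul, ul_per_ms=0.5):
    """Convert a desired reward volume to a valve-open duration."""
    if volume_ul <= 0:
        raise ValueError("reward volume must be positive")
    return volume_ul / ul_per_ms

def test_known_calibration():
    # The value we spent a week establishing, frozen forever.
    assert reward_duration_ms(5.0) == pytest.approx(10.0)

def test_rejects_nonsense_volume():
    with pytest.raises(ValueError):
        reward_duration_ms(-1.0)
```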

GroberUnfug2,
@GroberUnfug2@mastodon.social avatar

@jonny @shieldsy05 @SnoopJ @meejah
We got a digitizer card whose specs are top-notch even today, which is why it's still needed. Problem is, the only driver that fully utilizes this card is from the Win7 32-bit era and sometimes leaks memory. So my former boss wrote the data acquisition so that it checks for that DMA leak and slowly decreases its residual size, until a modal dialog pops up on screen, the PC plays that Windows didelidelit sound, and you have to restart the system. 😁

GroberUnfug2,
@GroberUnfug2@mastodon.social avatar

@jonny @shieldsy05 @SnoopJ @meejah It was quite some work to figure this trick out. Before the hotfix the PC restarted a few times a day; now it's just once, and you can do it safely.

Sadly the company that manufactures the card doesn't really believe in open source, so the group that uses this hardware is more or less stuck on Win7, because the newer drivers are slower than the old one. Replacing everything would have a 6-digit price tag and even more vendor lock-in... 🙈

jonny,
@jonny@neuromatch.social avatar

@GroberUnfug2 @shieldsy05 holy hell, working on a machine that is a known ticking time bomb sounds like psychological warfare.

shieldsy05,

@GroberUnfug2 @jonny same! We had a plate reader, which basically shines light through a sample in a 96-well plate and records the amount of light absorbed or transmitted by the sample, and can filter the light going in and/or going out to specific wavelengths.

It wasn't that old, but it used an old networking protocol called ARCnet, which was transmitted over a coaxial cable. Which meant we couldn't connect it to any other computer without swapping the PCI card over to that new computer as well, which we couldn't do because none of the other computers we had had standard PCI slots that would fit cards with that form factor.

jonny,
@jonny@neuromatch.social avatar

@shieldsy05 @GroberUnfug2 what blows my mind is how often the underlying thing the instrument does is extremely simple - how bright is this (filtered) light? But the rest of the system is in place so you can't just rig something up yourself: the validation and engineering and patching and hacking are more costly than just sucking it up and staying the course.

meejah,
@meejah@mastodon.social avatar

@shieldsy05 @SnoopJ @jonny The most-dysfunctional company I ever worked at produced medical software. Not quite this level of weirdness, but wow "medical software" in general is definitely fraught.
I'm sure one could get a few dozen theses out of trying to answer "why" ;)

Huia_fishocean,

@jonny Thankfully there are many scientists doing their work in a professional, organized, and reproducible framework.

jonny,
@jonny@neuromatch.social avatar

@Huia_fishocean
Yes, thankfully this is not about them ♥

teixi,
@teixi@mastodon.social avatar

@jonny

Recalled this talk in which good veteran Rob Williams tells this crude reality:

» Most datasets we generate ... are vulnerable to evaporation! «

Then rhetorically asks his audience of young researchers this pragmatic maintenance question:

» Can you go back 10 years in your lab, in your career...
fish out your 10 year old data,
and rerun the same algorithm you ran 10 years ago? «

~ 8:45
https://youtu.be/watch?v=4ZhnXU8gV44&t=525s

ps: What happened to the FAIR/O data initiative?
https://en.wikipedia.org/wiki/FAIR_data

jonny,
@jonny@neuromatch.social avatar

@teixi
Right, but like, what I'm saying is even if you could do that, if you couldn't also at least inspect the conditions of its collection, then you actually don't know to what degree the data is grounded in reality. You can replicate the numbers from before, but you can't say they match the thing they purport to measure (in a tangible, not philosophical way).

E.g. the dataset and analysis code in the OP is meticulously public. The experimental apparatus is not, outside of a one-paragraph explanation. I can't know if the stimulus was actually presented for 230ms or 250ms (in this case that matters), whether the timestamps are wrong, whether there was some framerate quantization on the monitor, whether there was some problem with the clock on the microcontroller, or whether any of the million lines of code are wrong.
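(The framerate point alone is easy to miss: a monitor can only show a stimulus for a whole number of frames, so a requested duration gets silently rounded. A toy check, numbers hypothetical:)

```python
# Stimulus durations are quantized to whole monitor frames.
refresh_hz = 60.0
frame_ms = 1000.0 / refresh_hz  # ~16.67 ms per frame at 60 Hz

for requested_ms in (230.0, 250.0):
    frames = round(requested_ms / frame_ms)
    actual_ms = frames * frame_ms
    print(f"requested {requested_ms} ms -> {frames} frames = {actual_ms:.1f} ms "
          f"(error {actual_ms - requested_ms:+.1f} ms)")
# 230 ms is not representable at 60 Hz (you get ~233.3 ms);
# 250 ms is exactly 15 frames.
```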

All these conversations are always about the data, and maybe the code for generating the figures, but nobody touches experimental transparency or the tooling that would be required to make it even remotely possible.

teixi,
@teixi@mastodon.social avatar

@jonny
@hipsterelectron

Aye!

Good News: FAIR is alive!

https://neuromatch.social/@INCF/111607371033222421

Bad News: Many standards!

https://www.incf.org/resources/sbps

jonny,
@jonny@neuromatch.social avatar

@teixi @hipsterelectron yes, and i don't think that many standards are necessarily a bad thing - everyone has very different needs, what geologists need might be radically different than what neuroscientists need, which is even more radically different than what literature scholars might need. we need a linking and translation layer, not to collapse the many standards :)

teixi,
@teixi@mastodon.social avatar

@jonny @hipsterelectron

IMHO there's already too much standards confusion!

As you point out, mostly because of poorly designed and presented site info: it's basically designed for standards contributors' needs, not for software implementors or end users.

i.e.: a lab tech doing studies with electrophysiology and optogenetics methods - how do they choose which standards to use?

jonny,
@jonny@neuromatch.social avatar

@teixi @hipsterelectron definitely agreed that a lot of the docs aren't focused on meeting the ppl who are meant to use them where they are. It's a hard problem, and the standards ppl i spend time with spend a lot of time trying to close that gap.

I think another route to making that less of a paralyzing decision is to make the choice less binding - if i pick this standard, no worries, i can easily translate what i've done into another later, and maybe back again as need be.

karihoffman,
jonny,
@jonny@neuromatch.social avatar

@karihoffman
I saw it and am def in those circles :)

Apfelphi,

@jonny that sums up our lab setups perfectly. Throw in some crucial hardware that hasn't been produced since the eighties and intermediate formatting of data in Excel.

jonny,
@jonny@neuromatch.social avatar

@Apfelphi if it makes u feel any better i have seen much much much scarier intermediate data steps than Excel, which is actually a p reasonable tool. let's just say you can treat JSON as a big long string if you really want to.
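(For the uninitiated, "JSON as a big long string" means something like this - a reconstructed example, not the actual code i saw:)

```python
import json

raw = '{"subject": "m041", "reward_ul": 4.2}'

# The horror: fishing a value out of JSON with string surgery.
start = raw.find('"reward_ul":') + len('"reward_ul":')
reward = float(raw[start:].split("}")[0].strip())

# The sane version: same result, and it survives reordered keys and nesting.
reward_sane = json.loads(raw)["reward_ul"]
assert reward == reward_sane
```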

johncarneyau,
@johncarneyau@mastodon.social avatar

@jonny it should be added that the reason that PC is still running XP is that the proprietary software that runs a critical piece of lab equipment only runs on that OS.

jonny,
@jonny@neuromatch.social avatar

@johncarneyau very true. And there have probably been numerous attempts at making open-source versions of that equipment, but nobody adopted them because their systems were already too entrenched to risk being an early adopter, and so they languished. Or nobody could convince a funding agency to take a risk on making the open-source version, because of the catch-22 of scientific FOSS: something has to already demonstrate wide use to be funded, but to be widely used (and exist at all) it needed to have already been funded.

yak,

@jonny
I'm one of those grad students that maintains a huge, bloated, undocumented, mostly uncommented, absurdly complex piece of software. Currently we can't replicate a lot of the results we claim in old papers, and we can't even figure out why because it's been through at least three different version control systems.

I rewrote the makefiles and I feel like I deserve a Nobel prize just for that.

jonny,
@jonny@neuromatch.social avatar

@yak bless u <3

onepict,
@onepict@chaos.social avatar

@jonny I remember in my comp sci university lab, the machines had no form of package management.

I remember we had a serious lab issue where we couldn't do our assignments. I'd just come off an industrial placement where we had package management, so we'd wipe and reinstall. I remember arguing with a senior lecturer about it.

The applications were installed manually so if there was uniformity it was accidental. So I can imagine it being even more random in the Sciences.

maegul, (edited )
@maegul@hachyderm.io avatar

@jonny

This hurt to read. Oh the trauma.

For those who haven’t seen this, it’s accurate. Like science in real life is done this way daily.

jonny,
@jonny@neuromatch.social avatar

@maegul
When a bigtime fraud drops, sometimes ppl say "you should see what the ppl who don't get caught do," but really I think it should be "you should see what's considered completely normal to do."

karihoffman,

@jonny @maegul and to ask trainees for more robustness of design (and proper documentation) would unreasonably hold up their progress in a system so hopped up on its academic 'roids of pub count and ridiculous journal-name snobbery that there's no wiggle room left for the quality of work. So? In the name of good practice and open science, and mercy for the trainees, supervisors (ok, it's me) ask for funds for separate staff to facilitate robustness of data collection and processing, only to have grant reviewers cut the staff positions as excessive b/c it's what the students should be doing 🤦🏻 ➡️ I legit want funding agencies that ask for "open anything" to pony up for dedicated support for non-empire-size labs to make it happen. Else it disproportionately affects the under-resourced and developing research groups.

jonny,
@jonny@neuromatch.social avatar

@karihoffman
Amen. And ideally that would be by funding the tooling to make that possible for everyone, rather than what they're doing now, which is just requiring each additional grant to have an earmark for open-whatever.
@maegul
