
philiphubbard

@philiphubbard@fediscience.org

Computer science, biology.


philiphubbard, to Neuroscience

Anthropic's Claude 3 Opus beats GPT-4 when translating text like this into input for a neuVid video: "Frame on the main ROI from the Janelia MANC. Fade it on over 1 sec. Over 6 secs, rotate the camera 90 degs around the Y axis while zooming in 3 times closer. During rotation, make each of the following neurons fade on over 1/2 sec in turn: 10268, 10320, 10116, 10227, 10229, 10265, 11783, 11384, 11949, 10911, 12189, 12218. Wait 1/2 sec then fade everything off taking 1 sec."
(1/2)

The overall outline of the Drosophila nerve cord rotates while 12 neurons appear one after another.
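
As a concrete illustration of how a description like this can be handed to Claude, here is a minimal Python sketch using the Anthropic SDK. The system prompt is only a placeholder; the actual prompting that neuVid uses lives in its repository and is not reproduced here.

    # Minimal sketch: send a natural language video description to Claude 3 Opus.
    # The system prompt is a placeholder, not the prompting neuVid actually uses.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    description = (
        "Frame on the main ROI from the Janelia MANC. Fade it on over 1 sec. "
        "Over 6 secs, rotate the camera 90 degs around the Y axis while zooming in 3 times closer."
    )

    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=2048,
        system="Translate video descriptions into neuVid JSON input.",  # placeholder
        messages=[{"role": "user", "content": description}],
    )

    print(response.content[0].text)  # the generated input, or the model's best attempt

Swapping in OpenAI's SDK with "gpt-4-0613" is enough to reproduce the head-to-head comparison in the next post.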

philiphubbard,

The "claude-3-opus-20240229" model gives higher-quality translation for tests with this case and 19 others:
https://github.com/connectome-neuprint/neuVid/blob/master/test/test-generate-expected.txt

It also costs less. The nearest OpenAI model in quality is "gpt-4-0613", which works better on these tests than the newer "gpt-4-0125-preview" model but is priced higher. Surprisingly, the newer model's larger context does not improve quality on these tests.

It's difficult to compare runtimes because of high server traffic.
(2/2)

philiphubbard, to unity

The Janelia Unity toolkit now supports panoramas back-projected onto a cylindrical screen and updated in real time for a tracked viewpoint inside the cylinder. The intended application is studies of animal behavior. The code is a byproduct of another project, so it's a bit experimental, but it's fun to watch examples like this one, meant to be displayed with three adjoining projectors. (1/2)

https://github.com/JaneliaSciComp/janelia-unity-toolkit

First, an overhead view of a position moving through a meadow. Then, a panoramic video from the viewpoint of the moving position.
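
The toolkit itself is Unity code, but the core geometry behind the real-time update is easy to sketch separately: each point on the cylindrical screen must show whatever lies along the ray from the tracked eye position through that point. Here is a rough Python illustration of that mapping, not the toolkit's actual code, with made-up parameter values.

    # Rough geometry sketch (not the toolkit's code): for an eye tracked inside a
    # cylindrical screen, find the world-space ray each screen point must display.
    import numpy as np

    def screen_point(theta, height, radius):
        # A point on the cylinder wall at azimuth theta (radians) and the given height.
        return np.array([radius * np.cos(theta), height, radius * np.sin(theta)])

    def view_ray(eye, theta, height, radius):
        # Direction from the tracked eye to that screen point; the panorama sample
        # drawn there should come from this direction in the virtual scene.
        d = screen_point(theta, height, radius) - eye
        return d / np.linalg.norm(d)

    eye = np.array([0.2, 1.1, -0.1])  # hypothetical tracked position inside the cylinder
    print(view_ray(eye, np.pi / 3, 1.2, 1.5))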

philiphubbard,

The curvature of a cylindrical screen can cause brightness loss and color artifacts near the boundaries between different projectors' images. To compensate, I'm trying two additional textures, each with a scaling factor for interactive adjustments. Here is the effect of a brightness correction texture (top) and a color correction texture (bottom, removing a red artifact), again for three adjoining projectors. (2/2)
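
As a rough illustration of the idea (not the toolkit's actual shader code), a correction texture with an interactive scaling factor can be applied as a per-pixel multiplier that blends between no correction and full correction:

    # Illustration only: apply a correction texture with an adjustable scale,
    # blending from no correction (scale=0) to full correction (scale=1).
    import numpy as np

    def apply_correction(image, correction, scale):
        # image: HxWx3 floats in [0, 1]; correction: per-pixel multipliers.
        factor = 1.0 + scale * (correction - 1.0)
        return np.clip(image * factor, 0.0, 1.0)

    frame = np.random.rand(480, 640, 3)           # stand-in for a projector image
    brighten_edges = np.full((480, 640, 1), 1.2)  # hypothetical brightness texture
    print(apply_correction(frame, brighten_edges, 0.5).shape)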

amytabb, to photography

Hello! I'm a researcher in Computer Vision & Robotics for Agriculture. I find that on here I mainly post photographs rather than tech & research stuff, though occasionally I do post about tech and research.

I have chronic disease, and it sucks.
(re-introduction as I switch instances...)

Interests:
Photo: our dog, who we refer to as the pup.

philiphubbard,

@amytabb Agriculture sounds like an interesting application for Computer Vision & Robotics, so I hope you post (accessible) discussions of that topic, too!

philiphubbard, to conservative

The Visualizing Biological Data 2024 workshop (VIZBI), March 13-15 in Los Angeles, looks very interesting. Speakers will discuss visualization across a range of biological areas. Participants can present a poster and a lightning talk; mine will cover neuVid, which simplifies video production with Blender. Here it is showing some neurons from the FlyWire data set, using an LLM to parse my natural language description of the video.

https://vizbi.org/2024

(1/3)

Four ACH and two GABA neurons from the FlyWire data set.

philiphubbard,

Here is the natural language description of the video, and the generated JSON file that neuVid uses to drive Blender. (Before rendering, I manually added the line "lightPowerScale": [0.5, 0.5, 0.5] because I thought it brought out the shadows.) The simple desktop application for running the generation is available on the releases page of the neuVid GitHub site.

https://github.com/connectome-neuprint/neuVid

(2/3)
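
For anyone scripting that tweak instead of editing by hand, the change is just one entry in the generated JSON. A small sketch, assuming the setting sits at the top level of the file (the file name here is hypothetical):

    # Sketch: load the JSON that neuVid generated, add the lightPowerScale setting,
    # and save it back. The file name is hypothetical, and the assumption that the
    # setting belongs at the top level may not match the actual schema.
    import json

    with open("flywire_video.json") as f:
        spec = json.load(f)

    spec["lightPowerScale"] = [0.5, 0.5, 0.5]  # the line added before rendering

    with open("flywire_video.json", "w") as f:
        json.dump(spec, f, indent=2)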

philiphubbard,

The FlyWire site provides access to the data set, along with powerful search and visualization tools.

https://flywire.ai

(3/3)

philiphubbard, to random

"HHMI Opens National Competition for Hanna H. Gray Fellows Program. The program provides up to $1.5 million in support for early career scientists who have the potential to become inclusive leaders in academic research."

https://hhmi.org/news/hhmi-opens-national-competition-hanna-h-gray-fellows-program

alchemistsstudio, to art
philiphubbard,

@alchemistsstudio Very nice! I bet your cat cups are easier to drink from than those I used to make.

philiphubbard, to random

One of my best decisions recently was to just sit and listen to symphonies nos. 1 and 3 by Florence Price. It’s a shame her work is relatively unknown, and I’m grateful that I have heard it on WETA (Washington, DC).

https://en.m.wikipedia.org/wiki/Florence_Price

philiphubbard, to random

On this day 20 years ago was the theatrical release of P. J. Hogan’s “Peter Pan”. ILM did some visual effects, and I got my first screen credit, for R&D (albeit with my name misspelled).

https://en.m.wikipedia.org/wiki/Peter_Pan_(2003_film)#/media/File%3APeter_Pan_2003_film.

philiphubbard, to Neuroscience

Last summer, postdocs Gabriela Michel and Emmanuel Marquez Legorreta travelled to Ghana to present a course in computational neuroscience. It was so successful that they are planning another course next year, in Rwanda.

https://www.janelia.org/news/janelia-postdocs-help-organize-course-in-ghana

philiphubbard, to Neuroscience

The neuVid system for neuroscience videos now supports natural language input via an LLM. Given natural language, it makes a video with Blender. E.g.:

"Frame on neuron 393766777 from the Janelia hemibrain. Orbit the camera 45 degrees over 6 seconds, and move in 25% while orbiting. 1 second in, fade on neuron 1196854070 over 1 second. Then fade on the output synapses of 393766777 connecting to 1196854070 taking 1 second. Synapses should be extra big."

https://github.com/connectome-neuprint/neuVid

(1/3)

A video of two neurons from the Janelia FlyEM hemibrain data set and the synapses connecting them.
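
Purely to suggest the shape of what the LLM produces from a description like that, here is a mock-up of a generated spec as a Python dict. The key and command names are invented for illustration and are not the actual neuVid vocabulary; the repository's examples show the real schema.

    # Mock-up only: the general shape of a generated animation spec. All names here
    # are invented for illustration and do not match the real neuVid schema.
    import json

    mock_spec = {
        "neurons": {"anchor": [393766777], "partner": [1196854070]},
        "animation": [
            ["frameCamera", {"bound": "neurons.anchor"}],
            ["orbitCamera", {"degrees": 45, "duration": 6}],
            ["fade", {"meshes": "neurons.partner", "duration": 1}],
        ],
    }
    print(json.dumps(mock_spec, indent=2))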

philiphubbard,

There is a simple desktop application for entering the natural language input, submitting it to the LLM, and saving the generated JSON, which then gets processed by the rest of neuVid to make the video. The application supports editing and undo/redo to extend the video or handle any mistakes the LLM may make. And mistakes do happen sometimes; that's the state of affairs these days even with the best LLMs, like GPT-4.

(2/3)
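
The editing and undo/redo behavior is the kind of thing a tiny sketch makes concrete; this is just a generic history stack, not the application's actual implementation:

    # Generic undo/redo history for edited text (not the application's real code).
    class History:
        def __init__(self, initial=""):
            self._states = [initial]
            self._index = 0

        def edit(self, new_text):
            # A new edit discards any redo states past the current position.
            self._states = self._states[: self._index + 1] + [new_text]
            self._index += 1

        def undo(self):
            self._index = max(0, self._index - 1)
            return self._states[self._index]

        def redo(self):
            self._index = min(len(self._states) - 1, self._index + 1)
            return self._states[self._index]

    h = History('{"animation": []}')
    h.edit('{"animation": [["fade", {}]]}')
    print(h.undo())  # back to the empty animation
    print(h.redo())  # forward to the edited version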

philiphubbard,

Currently, the system uses GPT-4, which has worked the best of the models I have tried. I would like to try other advanced models as access becomes available.

(3/3)

philiphubbard,

@jni @kristinmbranson thanks. I like that with this kind of application, it’s immediately obvious if the LLM makes a mistake. Or maybe its “mistake” is better than what had been intended.

philiphubbard, to blender

A fascinating blog post by @stamen on "Blender maps". They are "terrain maps distinguished mainly by their big, long, dramatic shadows," and get their name from the use of Blender to create them. The post discusses many interesting aspects of the history, variations, and impact of this technique.

https://stamen.com/shadows-on-maps-are-getting-a-lot-more-exciting-and-heres-why

philiphubbard, to Neuroscience

SWC files are a common way to represent neurons in mouse research, and they are now supported by the neuVid video system (github.com/connectome-neuprint/neuVid). Here is an excerpt from a longer video of the 1,227 neurons reconstructed by the MouseLight project. The full video is here: https://www.youtube.com/watch?v=aGYJGr-vTHI

Neurons appearing within the shell of the mouse brain.
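
SWC is a simple text format, one point per line: id, type, x, y, z, radius, and the parent point's id (-1 for the root). A minimal reader is only a few lines; this sketch is just to show the format, not neuVid's own importer.

    # Minimal SWC reader, to show the format; this is not neuVid's importer.
    # Each non-comment line: id, type, x, y, z, radius, parent_id (-1 for the root).
    def read_swc(path):
        points = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                i, t, x, y, z, r, parent = line.split()[:7]
                points[int(i)] = {
                    "type": int(t),
                    "xyz": (float(x), float(y), float(z)),
                    "radius": float(r),
                    "parent": int(parent),
                }
        return points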

philiphubbard, to Neuroscience

Some colleagues are working on videos that will identify dozens of different neuron types. To help, I extended the neuVid video system to support textual labels. Here is a test showing the neurons that help the fly maintain heading (taken from the FlyEM hemibrain data set). Specifying labels directly in the JSON input file makes it simpler to manage the labels and their timing relative to the rest of the animation. For more details, see: https://github.com/connectome-neuprint/neuVid.

50 EPG and EPTt neurons from the Drosophila fruit fly.

philiphubbard, to python

Nuitka works well for bundling a Python script and its dependencies into an executable that is easy to share. On macOS, the executable is easiest to use if it has code signing and notarization. Notarization with Nuitka is not well documented, so take a look at the detailed instructions here: https://github.com/JaneliaSciComp/python-dist-demo
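
The linked repo has the authoritative, detailed steps, but the overall flow is roughly: build with Nuitka, code-sign the result with a Developer ID certificate, then submit it to Apple's notary service. A rough sketch of that flow, with placeholder names throughout; exact output file names and options depend on your project and the repo's instructions.

    # Rough outline with placeholder names; see the linked repo for the real steps.
    import subprocess

    # Build a single-file executable with Nuitka (output name may differ by platform).
    subprocess.run(["python", "-m", "nuitka", "--standalone", "--onefile", "myscript.py"], check=True)

    # Code-sign with a Developer ID certificate (placeholder identity).
    subprocess.run(["codesign", "--force", "--options", "runtime",
                    "--sign", "Developer ID Application: Example Name (TEAMID)",
                    "myscript.bin"], check=True)

    # Zip and submit to Apple's notary service (placeholder keychain profile).
    subprocess.run(["ditto", "-c", "-k", "myscript.bin", "myscript.zip"], check=True)
    subprocess.run(["xcrun", "notarytool", "submit", "myscript.zip",
                    "--keychain-profile", "notary-profile", "--wait"], check=True)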

philiphubbard, to llm

Liu et al. show that an LLM fails to make effective use of relevant information in the middle of its input context, and that a longer context (e.g., gpt-3.5-turbo-16k) helps little. The implications are significant when using LLMs to perform tasks based on custom information (like retrieved documents) added to the prompt. It also matches my experience: a context with concepts A and B will work well for a task involving B, but a context with A, B, and C will stop working.
https://arxiv.org/abs/2307.03172

philiphubbard, to Neuroscience

The FlyEM MANC is now available on NeuronBridge.janelia.org. Drosophila researchers can use this tool to match the MANC electron microscopy (EM) data against light microscopy (LM) data, like that from the Janelia FlyLight project. A user can interactively examine matches right in the browser by clicking the "View in 3D" button. Interactive views are shareable, too... (1/3)

Interactive rendering of a Janelia FlyEM MANC neuron matched with a Janelia FlyLight LM volume.
