debugging tests that fail only in #GitLab #CI is the worst... #sourcehut absolutely has it right by allowing you to SSH into a runner that failed a job.
Every time I'm working with a #CI, I get amazed by how powerful they can get and how much they have advanced the software development realm. Tons and tons of checks and builds and so on can be done automatically upon merging a pull request.
I wholeheartedly thank everyone who has invested the time and effort to write and maintain all these pipelines.
That said, I'm really interested to see if any active and high-profile project is using #Guix or #Nix in CI instead of these NodeJS-based ones.
I need a GitHub Actions template "upload-to-S3-provider-who-is-not-AWS-for-dumb-bimbo" because damn that overcomplicated devops ecosystem is gatekeeping simple babes like me 😭💅
(look at this rocket-science shit called AWS documentation… what the hell)
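For what it's worth, the AWS CLI's `--endpoint-url` flag works against any S3-compatible provider, so a workflow can skip AWS-specific tooling entirely. A hypothetical sketch (the secret names and bucket path are placeholders, not a real template):

```yaml
# Hypothetical GitHub Actions workflow: push ./dist to an S3-compatible
# provider (MinIO, Backblaze B2, Scaleway, ...) using the plain AWS CLI.
name: upload-to-s3-compatible
on:
  push:
    branches: [main]
jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # --endpoint-url redirects the CLI away from AWS to your provider
      - run: aws s3 cp ./dist "s3://my-bucket/" --recursive --endpoint-url "$S3_ENDPOINT"
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_KEY }}
          AWS_DEFAULT_REGION: us-east-1  # many providers accept any non-empty region
          S3_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
```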
The @thoughtworks #TechRadar for April 2024 is worth a read - covers the movement away from #OpenSource licenses (the @osi gets a mention for their stewardship of licenses to date, and flags challenges), the rise of #AI code generation and the impacts on #CI and #CD workflows (AI code generators change workflow patterns), and #architectures for #LLM creation - we're seeing distinct patterns emerge through #RAG.
I've always enjoyed the Tech Radar format - it simplifies the complexity of a fast-changing landscape and makes it tractable for decision makers.
I have a #CI card for the TV, from #Polsat.box. Is there any way to connect it to a laptop? The TV has a slot where you insert it, but the laptop doesn't have one. Maybe it's just not possible? #pytanie #pomoc
Continuing our #EverythingOpen Schedule Highlights, we present Faisal Masood of #AWS who will talk about the #ML life-cycle of #data preparation, model #training, testing and deployment, and the role that #automation and #monitoring tools play.
Faisal shows you how to build a model workflow where all team members can collaborate to create a #CI and delivery pipeline for ML models.
Now that @github Actions supports macOS M1 runners, we've added them to the CPython CI, and have finally promoted aarch64-apple-darwin to the top support tier!
This means CI failures block releases, failing changes cannot be merged to main and must be fixed or reverted immediately, and the whole core dev team is responsible rather than one or two people.
#BetterKernelIntegration: Following our blog post from earlier this month, we have now submitted a patch to introduce kci-gitlab, a GitLab-CI pipeline for kernel testing! More here: http://col.la/kcigl
With 7.4.2, you can set COVERAGE_CORE=sysmon globally on your CI, and it'll only use it where available (Python 3.12 and 3.13 alpha), and use the default for 3.11 and older.
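In a GitHub Actions workflow testing multiple Python versions, that could look something like this (a sketch assuming a matrix job; the job layout and package list are illustrative):

```yaml
# Sketch: COVERAGE_CORE=sysmon set globally for the whole workflow.
# coverage.py 7.4.2+ uses the sys.monitoring core where available
# (Python 3.12 / 3.13 alpha) and the default core on 3.11 and older.
on: [push]
env:
  COVERAGE_CORE: sysmon
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13-dev"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install coverage pytest
      - run: coverage run -m pytest
```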
I've made a new #workflow which tags and releases #CD-built images automatically too. I can't wait for @nlnetlabs to release a new #unbound version so I can watch the #magic. Or watch it fail.
In my dev-env it works like a charm, though.
I don't want to seem arrogant, but I guess this is one of the most feature-rich, secure and advanced images around. And always made with ❤️.
Neat! It will help me detect whether a merge request causes any issues, because it runs the Go tests instantly. Very useful, esp. if I haven't touched the project for months / years and I'm not familiar with the code base anymore 😉
Thinking about reworking my common homelab CI pattern to use prefixed PR labels to pick which actions jobs / Ansible tasks to run.
That would decouple automatic digest updates from intentional actions. Not sure yet whether that's a good or bad thing, but it'll be something to explore. Renovate could also merge things like Git submodules and Forgejo packages without triggering Ansible host runs.
The existing solution is rigid: the tasks are predefined in the action job YML. It worked great for getting things into IaC initially. Now the constant digest updates and small changes across so many little mini hosts are causing a bit of churn on external hosts. I should work to reduce this through more mirroring and caching for quick gains.
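The label-dispatch idea could be sketched roughly like this in Forgejo/GitHub Actions syntax (hypothetical: the `deploy/` prefix and playbook path are made-up placeholders):

```yaml
# Hypothetical sketch: only PRs carrying a "deploy/"-prefixed label trigger
# an Ansible run, so Renovate's digest-bump PRs can merge without one.
on:
  pull_request:
    types: [labeled]
jobs:
  ansible:
    if: startsWith(github.event.label.name, 'deploy/')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # the label suffix picks the playbook, e.g. "deploy/webhosts" -> webhosts.yml
      - run: ansible-playbook "playbooks/${LABEL#deploy/}.yml"
        env:
          LABEL: ${{ github.event.label.name }}
```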