nibblebit

@nibblebit@programming.dev

Azure | .NET | Godot | nibble.blog


nibblebit, (edited)

Azure Artifacts has been great for private stuff. Have you tried GitHub Packages yet? It works well enough for public packages. I don’t know what your use case is, but what prevents you from using nuget.org?

nibblebit,

Ooh, I hadn’t heard of Sleet; that looks so neat.

10 publishes a day? Is that slow? I’m on a 20-person team and we run up to about 10 a month. Is your final product a suite of packages, or are you only using package references in your projects?

nibblebit,

Aah yes, naming collisions. I hadn’t considered that. I’m curious what kind of packages you want to make available to the public that are also prone to naming collisions solvable by a namespace convention. Are you publishing libraries, applications, VS extensions, or templates?

nibblebit,

As others have said so far: if you have zero experience, what you are aiming for is pretty complicated.

  • You need pathfinding. Godot’s navigation mesh will do great, but you could implement waypoints and A* yourself if you want more control and want to learn.
  • You need some placeholder models. Prisms or a Sprite3D work well because you can more easily see which way they are facing.
  • You need some agent behaviour. What does “move randomly but also towards the player” mean? Are you thinking of a Pac-Man-like situation? You might want to look into state machines (see the sketch after this list).
  • If you want the levels to be procedurally generated, you open a whole new can of worms.
  • Depending on your use case, you might want to spend time getting comfortable with the UI framework and Control nodes to build buttons and widgets for starting and resetting levels.
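
Roughly what that state machine could look like, as a toy Python sketch (the structure translates almost one-to-one to GDScript; all names here are illustrative, not Godot API):

```python
# Toy "wander vs. chase" state machine on a 1D line (illustrative only).
import random

WANDER, CHASE = "wander", "chase"
SIGHT_RANGE = 5

class Enemy:
    def __init__(self, x):
        self.x = x
        self.state = WANDER

    def update(self, player_x):
        # Transitions: chase when the player is close, wander otherwise.
        self.state = CHASE if abs(player_x - self.x) <= SIGHT_RANGE else WANDER

        # Per-state behaviour.
        if self.state == WANDER:
            self.x += random.choice([-1, 0, 1])        # drift randomly
        else:
            self.x += 1 if player_x > self.x else -1   # step toward player

enemy = Enemy(x=20)
for tick in range(15):
    enemy.update(player_x=10)
    print(tick, enemy.state, enemy.x)
```
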
nibblebit, (edited)

Audit logs and access-control paper trails.

Security event logging has to be:

  1. Broadly accessible
  2. Write-protected
  3. Provably complete

These three requirements are tricky and often conflicting. A blockchain might be an inefficient way to achieve them, but the glove does fit quite neatly.
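
To make the write-protection and proof-of-completeness part concrete, here’s a minimal hash-linked log sketch in Python. This is the core trick a blockchain builds on; everything here is illustrative, not any product’s API:

```python
# Minimal hash-linked audit log (illustrative). Each record commits to the
# hash of the previous one, so reordering, injecting, or purging an entry
# breaks every hash that follows it.
import hashlib

GENESIS = "0" * 64

def block_hash(prev_hash: str, entry: str) -> str:
    return hashlib.sha256((prev_hash + entry).encode()).hexdigest()

def append(chain: list, entry: str) -> None:
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"entry": entry, "hash": block_hash(prev, entry)})

def verify(chain: list) -> bool:
    prev = GENESIS
    for block in chain:
        if block["hash"] != block_hash(prev, block["entry"]):
            return False
        prev = block["hash"]
    return True

log = []
append(log, "user alice granted admin role")
append(log, "user bob read customer export")
assert verify(log)

log[0]["entry"] = "nothing to see here"   # tamper with history...
assert not verify(log)                    # ...and verification fails
```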

Logistical paperwork

  • Purchase Orders/Invoices and packing slips
  • Waybills/Bills of lading and CMRs

These kinds of documents require multiple stages of matching and approval by untrusted third parties. There are dozens of ecosystems of interacting systems that support processing these documents, but most people still use paper. Paper is more reliable when you need to deliver a container full of diapers from Poland to North Sudan, but it is incredibly prone to fraud and forgery. Having all of these approvals and transactions tracked on a blockchain, and letting different systems interact with the same chain, would make that possible without each ERP needing a REST API to every other ERP.
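
As a rough sketch of what one of those approval trails could look like: each party signs the document hash chained to the previous approval, so the sequence itself is tamper-evident. Toy Python using the cryptography package; the record shapes are my assumptions, not any real e-freight standard:

```python
# Toy multi-party approval trail (illustrative; not a real freight standard).
# Each party signs the document hash plus the previous approval's signature.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

doc_hash = hashlib.sha256(b"packing slip: 1 container of diapers").hexdigest()

def approve(party_key: Ed25519PrivateKey, prev_sig: bytes) -> bytes:
    return party_key.sign(doc_hash.encode() + prev_sig)

shipper, carrier, receiver = (Ed25519PrivateKey.generate() for _ in range(3))

sig1 = approve(shipper, b"")       # shipper approves the document
sig2 = approve(carrier, sig1)      # carrier approves on top of that
sig3 = approve(receiver, sig2)     # receiver closes the trail

# Anyone holding the public keys can replay the chain of approvals;
# verify() raises InvalidSignature if any step was forged or reordered.
carrier.public_key().verify(sig2, doc_hash.encode() + sig1)
print("approval trail intact")
```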

nibblebit,

Yeah, it’s not ideal, but you only need to pay the gas cost when you need to prove integrity, and that’s a lot cheaper than having to constantly be in sync with the world.

nibblebit,

Yeah, the problem isn’t the veracity of the logs; it’s giving third parties a mechanism to prove that the sequence of events in your log hasn’t been tampered with after the fact.

nibblebit,

Yeah, you’re not wrong; that would be more efficient. Again, a blockchain is not an efficient way to do it, but it would be effective.

In practice, audit logs are used by and for auditors: non-technical people who need evidence that would hold up to argument. Yes, you could send your logs to a third party. Now you have to prove that third party’s trustworthiness twice a year, to the standards of each legal entity you operate in. And lawyers are more expensive than blockchain devs haha :p

Having a private blockchain that you can share with a changing set of parties who can subscribe to it, without having to update anything about your infrastructure, is a benefit.

Even though I’ve lived through several ISO 27001 certifications, I’m still walking on thin ice when I say that it would probably be easier to explain the blockchain in practice than any other proof-of-completeness method, because the public is more aware of it. On the other hand, the public is also more skeptical of crypto, so it could also backfire :p

nibblebit,

Not every log needs that kind of security, and a chain does not need to be public. You download blocks from peers and do your own accounting.

Nothing is preventing you from only giving access to your chain to a trusted circle of peers.

Something you could do is encrypt your logs and push them to a chain shared by a number of peers who do the same with their own keys. Now you have a pool of accountability buddies, because if someone tries to tamper with the logs, you all hang together.

If you’re doing some spooky stuff and need to prove a high degree of integrity, you could push encrypted logs to a chain. The auditor can then appoint several independent parties whose only job is to continuously prove the integrity of your logs. Once that is proven, you can release your keys to the auditor, who can inspect your logs knowing that they were complete and untampered with during the audit period.
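
A toy sketch of that setup, assuming the Python cryptography package: entries are encrypted before they’re chained, so peers can prove integrity over the ciphertext while staying blind to the content, and the key is only released at audit time:

```python
# Toy encrypted hash chain (illustrative only).
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # held back until the audit
box = Fernet(key)

chain, prev = [], "0" * 64              # list of (ciphertext, hash) pairs
for event in ["disk wiped on host A", "root login on host B"]:
    ct = box.encrypt(event.encode()).decode()
    prev = hashlib.sha256((prev + ct).encode()).hexdigest()
    chain.append((ct, prev))

# Peers re-walk the hashes to prove nothing was reordered or dropped,
# without being able to read a single entry.

# At audit time the key is released and the auditor decrypts in place:
auditor = Fernet(key)
print([auditor.decrypt(ct.encode()) for ct, _ in chain])
```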

Again I understand it’s not the most efficient system, but there are less efficient and less flexible systems out there in enterprise land haha

nibblebit,

This right here is really the spirit of the post. Yes, there are many impractical applications, much like there are many impractical applications for RDBMSs, but the tech has such a stank on it that it’s important to remember it’s just a tool that can be useful despite the hype cycle.

nibblebit,

The security comes from consensus. Everyone needs to agree about what the truth is. The burden of proof is proportional to the number of peers that need to agree. Public chains require a lot of work to create consensus amongst hundreds of thousands of peers. Let’s say your chain consists of 12 companies all using the same chain to validate and verify each other’s transactions so they are ready for an audit.

Yes, it’s easier to have 12 peers conspire to manipulate the chain than 200,000 peers. But getting 12 businesses to conspire to cook the books is already several orders of magnitude harder than defeating the checks and balances we have in place now.
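
That “burden of proof proportional to the number of peers” idea, as a toy quorum check (purely illustrative; real chains do much more than compare head hashes):

```python
# Toy quorum check: accept a head-of-chain hash only if enough peers report it.
from collections import Counter

def quorum_head(reported_heads, quorum):
    head, votes = Counter(reported_heads).most_common(1)[0]
    return head if votes >= quorum else None

peers = ["abc123"] * 11 + ["f00d42"]            # one peer out of sync (or lying)
print(quorum_head(peers, quorum=9))             # 'abc123' -- consensus holds
print(quorum_head(["x"] * 6 + ["y"] * 6, 9))    # None -- no consensus, investigate
```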

nibblebit,

Let’s say a country mandates that its telecom sector audit its transactions. The idea would be to share the network with several peers, your telecoms. In this case, “mining” would be verifying the integrity of the chain and can be done by any one of the peers. The government or auditing authority could also be a peer in the network, and they are all capable of verifying the integrity of the chain through “mining”. You are right that it’s easier to have a small group of peers conspire to manipulate the chain. But it’s a lot harder for several telecoms to conspire than for one rogue CFO to cook the books.

In this application you’re not generating ‘valuable’ tokens in the sense Bitcoin does; the value is the integrity of the chain. People value the proof that no one has redacted or injected any transactions.

nibblebit,

Most auditing and insurance companies don’t have a webhook you can arbitrarily send your logs to. They have humans with eyes and fingers holding risk-management and law degrees, called auditors, whom you need to convince of your process integrity with words and arguments. And what happens if you switch insurer or certifier? You probably have to do a ton of IT work to change the format and destination of your logs. And how do you prove that your process was not manipulated during the transition?

What you describe are digital notary services, and it’s a billion-dollar industry. All they do is act as a trusted third party that records process integrity. IAM, change logs, RFCs, financial transactions, incident detection, and response are all sent in real time so you are ready for certification or M&A. Most small and mid-sized enterprises can’t afford that kind of service and are often locked out of certain certifications or insurances, or take a huge price cut when acquired.

Something like pooling resources into a provably immutable log trail isn’t unreasonable.

nibblebit,

I’m sure the hardcore variant would have its uses. But the goal isn’t necessarily to make fraud impossible, just evident, so probably more towards the latter option. And you are correct that you don’t need a blockchain to create a distributed database that enforces consensus. It’s just a neat tool you could use that scales pretty well, is relatively low-maintenance (SWE hours, not GPU hours), can adapt to a lot of cases, and is affordable for small and mid-sized companies. You could do the same by broadcasting your events to all your peers and having each peer save everyone’s events to compare notes later, but this would be a hassle to set up and keep consistent.

nibblebit,

I mean, you would need the hashing and consensus stuff to figure out exactly how the chain diverged. Just pooling the events would in theory be enough to prove that shenanigans were afoot when the ledgers don’t align, but that’s a bit too brittle to base a bi-annual evaluation on. You could close those gaps and set up some eventual consistency across peers, sure, but now you’re talking about some complicated proprietary software. It’s also not clear how a system like that would scale.
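
For illustration, this is the kind of analysis the hashing buys you that a plain event pool doesn’t: pinpointing where two copies of a ledger diverge (toy Python, not any specific chain’s tooling):

```python
# With plain event pools you only learn *that* two ledgers disagree; with
# chained hashes the first mismatching block marks *where* they diverged.
def first_divergence(chain_a, chain_b):
    for i, (ha, hb) in enumerate(zip(chain_a, chain_b)):
        if ha != hb:
            return i            # everything before i is provably identical
    if len(chain_a) != len(chain_b):
        return min(len(chain_a), len(chain_b))   # one peer is missing blocks
    return None                 # fully consistent

print(first_divergence(["h0", "h1", "h2"], ["h0", "h1", "X"]))  # -> 2
```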

There are plenty of convenient self-hosted blockchain solutions out there already that can be used to accomplish this, and a ton of tools to do analysis and tracing on these chains. That makes it not unreasonable when compared to a dedicated solution.

nibblebit,

Sorry, it was not my intention to be vague. I admit to not having a complete implementation in mind. My point is that linking each log entry as a block in a chain with hashes forces an order that is more difficult to tamper with than a timestamp or auto-incremented integer ID. You have to alter more data to inject or purge records from a chain than you would with a table of timestamped records. I admit I can’t make my case better than that.
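
A small demonstration of that forced ordering (toy Python, same caveats as before): purging one record changes every hash after it, so anyone holding an older head hash sees the rewrite, whereas a timestamped table can be edited row by row with no other trace:

```python
# Purging or injecting a record means recomputing every hash after it.
import hashlib

def rebuild(entries):
    hashes, prev = [], "0" * 64
    for e in entries:
        prev = hashlib.sha256((prev + e).encode()).hexdigest()
        hashes.append(prev)
    return hashes

original = rebuild(["e1", "e2", "e3", "e4"])
purged   = rebuild(["e1", "e3", "e4"])    # drop e2 and try to cover your tracks

# Every hash from the purge point onward changes, so any peer or auditor
# holding the old head hash immediately sees the rewrite:
print(original[-1] == purged[-1])         # False
```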

As for the simplicity factor: I think your suggestion of serving logs to peers from a server, like an RSS feed, is a fine solution.

But I can set up a MultiChain instance in a few hours and start issuing tokens. I can send the same link out to my peers and auditors for them to connect and propagate the shared state. The community can shrink and grow without the members having to change anything. Now it’s a mostly hands-off venture that scales relatively well. I’m an okay programmer, but coordinating an effort to build, test, and verify a system that does the same with RSS feeds across multiple companies would take me months. Something like MultiChain or Hyperledger is comparatively turnkey.

I’m not here to say this is the best way to do it. I’m just saying there’s some merit to leveraging these technologies.

If you ask me, audit logs should just be posted to Twitter, the only true write-only database.

nibblebit,

Sorry, I didn’t mean to be dismissive. I wholeheartedly agree with you. What I meant was that it’s a shame I, as an engineer in the year 2023, would have a hard time pitching a blockchain solution to a non-crypto problem to paying customers no matter how fitting the solution might be. I don’t think that’s very disputable. Now this attitude is entirely driven by the last decade of unsubstantiated crypto hype and associated bad faith actors. It has nothing to do with the technology as it is.

nibblebit,

Man, I have to agree. Your write-up reflects my experience with Azure Functions in a mid-to-large-sized application way better than the post does. Fantastic.

nibblebit,

Hey, I’ve worked with ML.NET before; it’s not the best framework for C#, but it is capable. I’m having trouble understanding what the goal of your model would be. Is it just text prediction, or classification? ML.NET, and pretty much any ML framework, does need some experience with machine-learning methods and models to achieve good results. Is this your first time doing something with ML?

I would look into the SciSharp stack, a great collection of AI/ML framework bindings for common frameworks. I would recommend looking into Torch.Net and Keras in particular.
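
For a sense of scale, this is about the minimum a text classifier takes in Keras (Python shown; the SciSharp bindings follow the same model-building style, and the data here is made up):

```python
# Minimal Keras text-classification sketch (toy data, illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

texts = np.array(["great library", "works well", "total garbage", "broken build"])
labels = np.array([1, 1, 0, 0])    # 1 = positive, 0 = negative

# Turn raw strings into integer token sequences inside the model itself.
vectorize = layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vectorize.adapt(texts)

model = keras.Sequential([
    vectorize,
    layers.Embedding(input_dim=1000, output_dim=16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),   # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=10, verbose=0)

print(model.predict(np.array(["great build"])))  # probability of "positive"
```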

nibblebit,

It depends… The myriad reasons to have a dedicated release day often have to do with synchronizing marketing, support, and the other departments.

My question is: what does QA mean for your org? Does it mean defect detection? Testing? Acceptance? Those are all different things. The teams I see that are able to release every day have a strict separation of quality control and functional acceptance. QC is used to detect defects and regressions and is handled by highly automated processes owned by engineering. Acceptance is then done by a dedicated product/quality team that figures out whether the new functionality is actually built to spec and solves the customer’s problems. This also involves blogs, documentation, customer contact, release notes, tutorials, workshops for the support team, etc. This second part is handled by feature flagging, so that the product teams can beta test, run a limited release, and track adoption.
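
The flag gate itself can be tiny. A common pattern is a deterministic hash of the user ID deciding who sees the beta; this is an illustrative sketch, not any specific flag vendor’s API:

```python
# Deterministic percentage rollout (illustrative).
import hashlib

def flag_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    # Hash feature+user so each user lands in a stable, independent bucket per flag.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

enabled = flag_enabled("new-checkout", "user-4711", rollout_percent=10)
print("new checkout" if enabled else "old checkout")
```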

It really depends on what kind of software you’re running and what your relationship is with the end user and the rest of the org. Something that is the same in all cases is that your requirements and acceptance criteria need to be very clear from the start, and regression testing needs to be fully automated.

nibblebit,

Explain to me how this isn’t code golfing.

nibblebit,

This is a bit of a narrow view of a very vague term. Having worked with organisations of many different sizes, I can say that the responsibilities of whoever is labelled CTO are completely arbitrary. The only thing you can establish is that they are the person accountable for the technology decisions.

Sometimes that’s a legacy developer, sometimes that’s the first sys-admin.

Sometimes it’s the VP of engineering.

Sometimes that’s the person that maintains the best relationships with software vendors.

Sometimes it’s the person that was hired externally to explain the tech to the CEO and lets them make informed executive decisions.

Sometimes it’s just a public figure used to promote the org and maybe do DevRel.

Sometimes it’s the Architect that designed the ecosystem.

Sometimes it’s the ancient programmer that has kidnapped the entire codebase so that no one else can sanely work on it.

Sometimes it’s a Six Sigma type that set up the ticketing system, PRs, and the release process.

At any size, the CTO is whatever the org needs them to be at that point.

nibblebit,

Every engine is going to come with engine-specific problems. You will also come up against many general game-development problems, for which the engines have come up with many different creative solutions.

I can’t make it any simpler for you: you will waste a bunch of time learning stuff. The only way to avoid that is literally building your own engine that conforms to your expectations and assumptions, because no one else can do that.

There are so many invisible, boring-ish problems: UI, scaling, networking, instancing, level changing, loading screens, even scheduling, etc. You need to learn to love the boring stuff, because it comes at a 10:1 ratio to the fun-ish creative problems.

However, it’s better to start wasting that time today than next week.
