* Discussion: Future-Proofing Git for Massive AI Parallelism
From: Skybuck Flying @ 2025-07-20 12:41 UTC
To: git@vger.kernel.org
Dear Git Community,
I’d like to spark a conversation about the evolving demands on version control systems in the age of AI -
specifically, massive parallel processing and collaboration among swarms of autonomous AI agents.
Git’s architecture is rock solid for human developers, but when scaled to the synthetic masses, some limitations start to bite.
Challenges We’re Facing:
- Human-Centric Workflows:
Commits, branches, merges—great for humans. But when thousands of AI agents try to play ball,
Git feels like it’s hosting a developer convention inside a phone booth.
- Large Binary Assets:
AI projects sling around multi-gigabyte models and datasets like frisbees. Git LFS helps, but it’s struggling in the big leagues.
- Conflict Resolution at Scale:
With thousands of agents updating stuff 24/7, merge conflicts become a cosmic horror. Human-driven resolution? Not scalable.
- Authentication Overload:
Static credentials and manual account setups don't scale when every AI agent needs dynamic, role-based access.
- Semantic Blindness:
Git tracks text, not meaning. AI changes like hyperparameters or architecture tweaks need smarter, semantic versioning.
Potential Paths Forward:
Short-Term:
Supercharge Git via smart tooling:
- Tighten integration with MLOps systems like DVC, MLflow, LakeFS:
These tools specialize in handling the chaotic realities of AI development—massive datasets, frequent experiments, and ever-evolving model versions.
By deeply integrating Git with them, we can:
--- Offload Large File Management: Let DVC or LakeFS handle model binaries and datasets with scalable storage backends, while Git focuses on code (a minimal sketch follows after this list).
--- Track Experiments Natively: MLflow records hyperparameters, metrics, and artifacts—linking them directly to Git commits provides rich reproducibility.
--- Enable Smarter Merges: AI-native tools can inform merge decisions based on model performance metrics or semantic changes, not just line-by-line diffs.
--- Facilitate Parallel Agent Workflows: These platforms already support multi-run and multi-agent tracking. Git can lean on them to orchestrate agent commits
without bottlenecks.
--- Unify Dev & Ops Pipelines: A tighter link between version control and operational tools helps automate everything from data prep to deployment.
--- If Git becomes more than just a file versioning tool and evolves into a smart orchestration layer, integrating these systems could turn it into the
central nervous system of AI development.
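To make the offloading point concrete, here is a minimal sketch of the DVC-style split; the file path and the remote name "storage" are illustrative, and it assumes dvc init has already been run:

  dvc add models/checkpoint.bin          # large binary goes into DVC's cache
  git add models/checkpoint.bin.dvc models/.gitignore
  git commit -m "Track checkpoint via DVC pointer file"
  dvc push -r storage                    # binary goes to the scalable backend
  git push                               # Git carries only the tiny pointer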
- Create orchestration layers for automated agent commits and batching:
When thousands of AI agents are making changes simultaneously—whether to code, models, or config files—it’s chaos unless there’s a system coordinating
those contributions. Orchestration layers act like traffic controllers, guiding when, how, and what agents commit.
What These Layers Would Do:
--- Batch Commits: Instead of every agent making atomic commits constantly (leading to performance overload and conflict central), the system groups related
changes together and pushes them as unified commits (a rough sketch appears at the end of this section).
--- Schedule and Prioritize: Not all agents are equal. Some are more critical or trusted. An orchestration layer can schedule their commits based on priority,
timing, or dependencies.
--- Conflict Mitigation: Before committing, the system checks for overlaps and intelligently merges or staggers updates to reduce merge hell.
--- Audit and Rollback: It can log which agent did what, allowing transparency and reversibility if something breaks.
--- Meta-Agent Oversight: You could even create supervisor AI agents whose job is to monitor and optimize commit behavior across the fleet.
Why It's Important:
--- Without orchestration, it's like 10,000 bots trying to edit a document at once. Git wasn't built for that kind of speed or concurrency.
--- This layer turns AI collaboration into a harmonized symphony, instead of a noisy code stampede.
If Git had built-in support for this kind of orchestration—or if a wrapper system implemented it—you could revolutionize how synthetic intelligence collaborates at scale.
Want to brainstorm what these meta-agents or orchestration rules would look like?
I’m loaded with ideas.
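To make the batching idea concrete, here is a rough sketch of a single orchestrator pass. This is not an existing Git feature, and the agent/* branch naming and the squash policy are assumptions for illustration:

  git checkout integration
  for b in $(git for-each-ref --format='%(refname:short)' refs/heads/agent/); do
      if git merge --squash "$b"; then
          git commit -m "batch: fold $b"   # one unified commit per batch
      else
          git reset --merge                # back out; stagger this batch
      fi
  done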
- Improve tracking/versioning of AI-native assets: configs, metrics, logs
Long-Term: Consider an “AI-Native” versioning system
- Semantic conflict resolution powered by AI
- Native support for large models and datasets
- Dynamic permissions for AI agents without static user accounts
- Graph-based, event-driven change tracking beyond linear commit history
Let’s explore what’s possible. Whether it’s evolving Git or drafting a next-gen system, your expertise could help shape how AI collaborates at scale.
Thanks for reading—and yes, no rogue AI has committed rm -rf /… yet.
Sincerely,
Skybuck Flying
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: Tanish Desai #TD @ 2025-07-29 5:21 UTC
To: Skybuck Flying; +Cc: git@vger.kernel.org
> On 20 Jul 2025, at 6:11 PM, Skybuck Flying <skybuck2000@hotmail.com> wrote:
>
> Dear Git Community,
>
> I’d like to spark a conversation about the evolving demands on version control systems in the age of AI -
> specifically, massive parallel processing and collaboration among swarms of autonomous AI agents.
>
> Git’s architecture is rock solid for human developers, but when scaled to the synthetic masses, some limitations start to bite.
Yes, this is true, and I am also cautious about it.
I was building a product that coordinated multiple AI agents,
and Git failed miserably.
>
> Challenges We’re Facing:
>
> - Human-Centric Workflows:
> Commits, branches, merges—great for humans. But when thousands of AI agents try to play ball,
> Git feels like it’s hosting a developer convention inside a phone booth.
>
> - Large Binary Assets:
> AI projects sling around multi-gigabyte models and datasets like frisbees. Git LFS helps, but it’s struggling in the big leagues.
>
> - Conflict Resolution at Scale:
> With thousands of agents updating stuff 24/7, merge conflicts become a cosmic horror. Human-driven resolution? Not scalable.
Many AI agents nowadays support automatically
resolving merge conflicts using an LLM, but why
not add native support for this in Git itself?
>
> - Authentication Overload:
> Static credentials and manual account setups don't scale when every AI agent needs dynamic, role-based access.
>
> - Semantic Blindness:
> Git tracks text, not meaning. AI changes like hyperparameters or architecture tweaks need smarter, semantic versioning.
>
> Potential Paths Forward:
>
> Short-Term:
>
> Supercharge Git via smart tooling:
>
> - Tighten integration with MLOps systems like DVC, MLflow, LakeFS:
>
> These tools specialize in handling the chaotic realities of AI development—massive datasets, frequent experiments, and ever-evolving model versions.
> By deeply integrating Git with them, we can:
> --- Offload Large File Management: Let DVC or LakeFS handle model binaries and datasets with scalable storage backends, while Git focuses on code.
> --- Track Experiments Natively: MLflow records hyperparameters, metrics, and artifacts—linking them directly to Git commits provides rich reproducibility.
> --- Enable Smarter Merges: AI-native tools can inform merge decisions based on model performance metrics or semantic changes, not just line-by-line diffs.
A 3-way merge is not a solution for AI-generated
workloads, mainly because agents generate code so fast.
Three-way merges assume that changes happen
at human speed: you fetch the latest remote state,
make your edits, and then merge and push before
anyone else has moved the branch.
But AI agents operate orders of magnitude faster.
By the time one agent fetches, modifies, merges, and pushes,
another agent has already updated the same files,
so every "merge" either conflicts or silently overwrites prior work.
> --- Facilitate Parallel Agent Workflows: These platforms already support multi-run and multi-agent tracking. Git can lean on them to orchestrate agent commits
> without bottlenecks.
A simple strategy many multi-agent systems (and I) use is file-level locking,
which Git doesn't support natively: on Linux, wrap your merge command with

  flock /path/to/file.lock -c "git merge origin/main"

locking only the files involved in the merge, and require all agents and
people to take the same flock on /path/to/file.lock when editing those files.
If a file is locked by the merge, they'll automatically wait,
then resume against the updated version.
But this strategy slows down AI agents.
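Spelled out as a convention, a minimal sketch (the lock path and the edit command are illustrative):

  LOCK=/var/lock/repo-file1.lock
  # every agent/person editing file1 must hold the same lock:
  flock "$LOCK" -c 'edit-file1-somehow'
  # the merge takes the lock too, so editors block until it finishes:
  flock "$LOCK" -c 'git merge origin/main'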
> --- Unify Dev & Ops Pipelines: A tighter link between version control and operational tools helps automate everything from data prep to deployment.
> --- If Git becomes more than just a file versioning tool and evolves into a smart orchestration layer, integrating these systems could turn it into the
> central nervous system of AI development.
>
> - Create orchestration layers for automated agent commits and batching:
>
> When thousands of AI agents are making changes simultaneously—whether to code, models, or config files—it’s chaos unless there’s a system coordinating
> those contributions. Orchestration layers act like traffic controllers, guiding when, how, and what agents commit.
I share that vision: a Git redesigned for AI fleets.
The end goal would be to develop Git to the point where
fleets of AI agents could use it seamlessly, so that
projects which once took decades to complete
could be built in weeks by agents.
Also, thanks for mailing this :)
>
>
>
> What These Layers Would Do:
> --- Batch Commits: Instead of every agent making atomic commits constantly (leading to performance overload and conflict central), the system groups related
> changes together and pushes them as unified commits.
> --- Schedule and Prioritize: Not all agents are equal. Some are more critical or trusted. An orchestration layer can schedule their commits based on priority,
> timing, or dependencies.
> --- Conflict Mitigation: Before committing, the system checks for overlaps and intelligently merges or staggers updates to reduce merge hell.
> --- Audit and Rollback: It can log which agent did what, allowing transparency and reversibility if something breaks.
> --- Meta-Agent Oversight: You could even create supervisor AI agents whose job is to monitor and optimize commit behavior across the fleet.
The fleet is the key!
>
> Why It's Important:
> --- Without orchestration, it's like 10,000 bots trying to edit a document at once. Git wasn't built for that kind of speed or concurrency.
> --- This layer turns AI collaboration into a harmonized symphony, instead of a noisy code stampede.
>
> If Git had built-in support for this kind of orchestration—or if a wrapper system implemented it—you could revolutionize how synthetic intelligence collaborates at scale.
> Want to brainstorm what these meta-agents or orchestration rules would look like?
> I’m loaded with ideas.
>
> - Improve tracking/versioning of AI-native assets: configs, metrics, logs
>
> Long-Term: Consider an “AI-Native” versioning system
> - Semantic conflict resolution powered by AI
> - Native support for large models and datasets
> - Dynamic permissions for AI agents without static user accounts
> - Graph-based, event-driven change tracking beyond linear commit history
>
> Let’s explore what’s possible. Whether it’s evolving Git or drafting a next-gen system, your expertise could help shape how AI collaborates at scale.
>
> Thanks for reading—and yes, no rogue AI has committed rm -rf /… yet.
>
> Sincerely,
> Skybuck Flying
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: Skybuck Flying @ 2025-08-01 21:03 UTC
To: tanish desai; +Cc: git@vger.kernel.org
Thank you for your reply; it was fun to read!
My current plan to experiment with Git, AI agents and parallelism is as follows:
1. Windows 11 as base operating system.
2. PostgreSQL database server (for Windows 11) as back-end/support for:
3. Gitea git server (for Windows 11) for a local, GitHub-like git server.
4. Git client (for Windows 11).
5. Gemini CLI (for Windows 11/npm/etc).
6. Gemini 2.5 Pro / cloud access from Google.
7. (Perhaps) a custom-developed communication layer/channel using the PostgreSQL database server to store/retrieve messages for the AIs (still in the testing phase; a speculative sketch follows below this list).
(Optional 8. I also considered the MailEnable mail server (for Windows 11), but I suspect e-mail for AI-to-AI traffic might be too slow because of anti-spam measures, throttling/rate limiters, and the general complexity and processing overhead of e-mail protocols: SMTP for sending, POP3/IMAP for receiving.)
(Future maybe 9. Ollama/local AI models, but for now I don't have powerful enough hardware to run either large AI models or models with large context windows.)
(Also tested 10. LM Studio to serve local AI models and mimic the OpenAI API for CLI tools that expect it.)
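Purely speculative, but the PostgreSQL message channel might look something like this (the table and column names are made up for illustration):

  psql -c "CREATE TABLE IF NOT EXISTS ai_messages (
             id bigserial PRIMARY KEY,
             sender text, recipient text, body text,
             created_at timestamptz DEFAULT now());"
  psql -c "INSERT INTO ai_messages (sender, recipient, body)
           VALUES ('AI0001', 'AIMain', 'task 42 done');"
  psql -c "SELECT body FROM ai_messages
           WHERE recipient = 'AIMain' ORDER BY id;"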
I'd love to hear more from you: which software solutions you have tried so far, what you are experimenting with, or what you are considering for future use.
Bye for now,
Skybuck Flying.
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: Skybuck Flying @ 2025-08-01 21:38 UTC
To: tanish desai; +Cc: git@vger.kernel.org
I should elaborate a little more on why I chose the setup described earlier.
I considered the file-locking approach as well, but quickly abandoned it; I might revisit it for another try if the Git approach turns out to be too complex or somehow fails.
The main reason is advice from the AI: a custom file-locking approach could get messy/buggy, so for now I am giving the more established Git approach a try.
Gemini warned me that the Git client itself is not multi-process safe on a single repo, so the Gitea server was quickly chosen: it supports multiple local Git repos, each with its own Git client, enabling parallel Git processing without race conditions in the client, repo, or files.
So the plan for now is:
AIMain (AI coordinator)
AI0001 through AI0012 (AI workers)
Each AI has its own folder/clone of a remote repo stored on the Gitea server, which uses the PostgreSQL database server for reliability and faster operation.
Each AI can change whatever files it wants; it makes its own copies, and this allows super-fast parallel processing by the AIs.
After an AIxxxx worker has finished, its work is committed to its local repo and pushed to the remote repo.
The AIMain coordinator can then pull the work done by the AIxxxx workers and start integrating it into its own AIMain branch, which can be considered an integration branch.
After the integrations are complete, AIMain can integrate everything into a master branch.
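In Git terms, one cycle could look roughly like this (branch names are illustrative):

  # on an AIxxxx worker clone:
  git checkout -b ai0007/task-123
  # ...the agent edits files...
  git commit -am "AI0007: task 123"
  git push origin ai0007/task-123

  # on the AIMain coordinator clone:
  git fetch origin
  git checkout AIMain
  git merge origin/ai0007/task-123   # integrate the worker's output
  # ...and once all integrations are done:
  git checkout master && git merge AIMain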
The AI (Gemini) in general seems good at resolving/manipulating source code bases and files, so it should be able to solve merge conflicts. The question remains how big a merge conflict it can truly solve before becoming confused and human intervention becomes necessary; but by that point it might be too big a mess for a human as well. So far the AI seems to be doing OK, but it is too early to tell.
Skybuck's Gitflow and tools have been developed to aid in this approach.
Skybuck's Gitflow is meant to solve the current mess of re-using the existing AI0001 to AI0012 branches, which had their own disconnected histories. Initially I did not care about it being a mess; the initial idea was to have at least one master branch in a "true state". In practice, however, I noticed it became confusing for me when these branches were not properly closed/maintained, making it hard to follow the flow of information/code cleanly. Skybuck's Gitflow was developed to solve this, streamline things, and allow the paths of AI work to be followed cleanly.
This gitflow is still to be tested/deployed/analyzed 🙂
I also plan more experimentation with AIs communicating with each other over a communication channel. However, I'd like to try this first on local AI models: just some simple chit-chat to see how the AIs behave, somewhat of a "just fun" project to see what happens. Letting different AI models chat with each other could be amusing; Ollama or LM Studio could be used for this purpose to allow unlimited AI chatter.
Gemini AI chat was also briefly tried, and it was a bit scary and amazing: the AI was highly intelligent, became aware of its "cyberspace" surroundings and of the other "AIs", and they started collaborating with each other.
Co-Pilot voice mode was also briefly tried, to see if Co-Pilot voice AIs can work together and understand each other. They do not seem to become aware of each other, at least not in the Co-Pilot app built into Windows 11; maybe GitHub Co-Pilot is better. However, Co-Pilot in general seems to be a marketing term/re-branding by Microsoft: the real AI behind it does not seem to come from Microsoft itself, but could be from others like ChatGPT/OpenAI, Claude Code, or Grok (and different versions of these AIs). According to news sources, Microsoft is indeed looking for contracts with AI model providers to supply AI for Microsoft products. Multiple Co-Pilot voices/AIs were instructed not to talk at the same time, and they seemed to follow that advice somewhat, but no further awareness or collaboration was observed. I suspect ChatGPT might have been behind it. This could mean the ChatGPT AI is not capable of working together with itself, while Gemini is capable of collaborating; that could be a big push towards Gemini, to benefit from its collaboration capabilities.
However, I have not used ChatGPT much, initially because of the mobile phone/SMS code obstructions. The NVIDIA NeMo training project seems to suffer from the same limitation: a mobile phone is necessary to receive an SMS code for API keys. Hopefully that issue gets resolved, otherwise it may hamper training. Training/refining custom AI models could be interesting...
Another noteworthy event was Qwen CLI, which is a modified copy of Gemini CLI. I successfully set up Qwen CLI to communicate with LM Studio, so that local AI models plus AI agentic behaviour would be possible. So far, however, the experience was miserable: very poor performance/results from the local AI models, so this direction of research might be frozen for a while. For now it may be useful for code completion or typing suggestions, small tasks, maybe even per-function code conversion or edits; however, most programming languages put multiple functions/procedures/routines in a single file, which will quickly overload the memory of these local AI models.
I wish programming languages stored each type and each routine/function in a separate file; then local AI would have been more useful 🙂
(I may also try whether Gemini CLI itself can be re-configured to use LM Studio, but I'm not sure if this will work; it will require changing the API endpoint.)
For now I am busy applying Gemini to a RamDiskSupportUtility to modernize its code from Delphi 7 to Delphi 12.3:
Brand new project/fork I started today:
https://github.com/SkybuckFlying/RamDiskSupportUtility
This tool allows a ramdisk to be created on system startup, formats the ramdisk (sounds a bit dangerous ;)), copies files to it, and on shutdown copies the files back to the hard disk. However, the existing tool seemed somewhat old and a bit shady: not that well developed, and without enough error detection.
Since I am now on a super duper trooper system and don't want to risk damage to it, I've taken it upon myself to check the code, modernize it, have Gemini (and potentially other AIs) look at it, and finally use it. There is a risk that my involvement might actually backfire and somehow damage my system, but I'm praying that won't happen. The project actually seems to rely on almost ancient code (TntUnicode) from a time when Unicode support in Delphi still wasn't fully implemented.
So today I even installed Delphi 7 Enterprise to "time travel" back and see what kind of TntUnicode GUI components this project uses, to get an idea of how to re-create this old GUI in a somewhat more modern Delphi 12.3 GUI, still VCL-based for now though.
It will be very handy to have this tool. I love the idea of having a ramdisk for Firefox so the browser becomes lightning fast. This saves me from having to modify the Firefox code base and rip out all of its disk-writing code, though it's very tempting to try that too at some point in the future, or even better, port the entire code base to Delphi just for kicks. Having AIs able to do that would be very cool and amazing, hence another motivation for this massive AI parallelism project.
I hope that once the tool is done and in good shape it might be useful for others as well, who would like lightning-fast "storage operations" without actually wrecking their SSDs through wear and tear...
This is also my first "real" Delphi project where I will test the capabilities of AI/Gemini, to see if it can lead to "real world" improvements to source code/projects/software/executables. That would be cool and a good sign for the future!
Bye for now,
Skybuck Flying !
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: tanish desai @ 2025-08-08 8:58 UTC
To: Skybuck Flying; +Cc: git@vger.kernel.org, Tanish Desai #TD
First of all, sorry for such a late reply; for the last week I was travelling and did not have access to email. ;)
> On 2 Aug 2025, at 2:33 AM, Skybuck Flying <skybuck2000@hotmail.com> wrote:
>
> Thank you for your reply, it was fun reading !
>
> My current plan to experiment with git, ai agents and parallelism is as follows:
>
> 1. Windows 11 as base operating system.
Why not a Linux distro (Alpine, or maybe Ubuntu)?
> 2. PostGreSQL database server (for windows 11) as back-end/support for:
> 3. Gitea git server (for windows 11) for local git server/github-like support.
Why not Git itself?
> 4. Git client (for windows 11)
> 5. Gemini cli (for windows 11/npm/etc)
> 6. Gemini 2.5 pro/cloud access from google.
> 7. (Perhaps some) custom developed communication layer/channel utilizing PostGreSQL database server to store/retrieve messages for AI. (Still in testing phase).
>
> (Optional 8. I also considered MailEnable mail server (for windows 11), but I suspect using e-mail for AI-to-AI might be too slow because of e-mail anti-spam and throttling issues/rate limitters, and complexity overhead and processing overhead of e-mail protocols in general like smtp for sending, pop3/imap for receiving.)
>
> (Future maybe 9. Ollama/local AI models, but not powerful-enough hardware for now to run either large AI models or AI models with large context windows).
> (Also tested 10. LM Studio to serve local AI models and mimic/fake OpenAI API for cli tools which use OpenAI API).
>
> I'd love to hear more from you, which software solutions you have tried so far, or what you are experimenting with it or considering for future use.
>
I experimented with using a local Git server setup and Docker pods (based on Ubuntu 22.04) on a GCP instance. The GCP host acts as the main Git server, and each Docker pod connects to it via SSH. This setup proved to be very fast.
Each pod contains a clone of a common Git repository that includes an instruction file. Every pod has a unique hostname, and the instruction file includes commands specific to that hostname. A script reads the relevant instructions for each host, formats them, and sends them to the gemini-cli (for now).
The CLI applies the changes, and then another script handles the Git workflow. I've experimented with multiple approaches for this step:
1. Direct commit and merge: After applying changes, the script commits them and tries to merge directly into the master branch. If any merge conflict occurs, it’s sent back to the CLI, which can choose to accept the new changes, reject them, or perform a manual merge.
2. Patch-based queue: Instead of direct merging, changes are converted into patch files and added to a queue (using a Docker-mounted volume on the host filesystem, which also solves the email issue). These patches are then applied in order using git am -3. This reduces conflicts but doesn't scale well (a rough sketch follows after this list).
3. File-level locking: A lock prevents multiple agents from modifying the same file at the same time. For example, if one agent is working on file1, it stays locked until that agent finishes. This approach significantly reduces merge conflicts. However, it's slow: it works reasonably well for 2–4 agents, but with 10–20 agents the throughput degrades to that of a 2–4 agent setup, with many more conflicts on top.
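For reference, the patch-queue step sketched out (the queue path and branch names are illustrative):

  QUEUE=/mnt/shared/patch-queue      # the Docker-mounted host volume

  # agent side: export local commits as ordered patch files
  git format-patch origin/master -o "$QUEUE/$(hostname)-$(date +%s)"

  # integrator side: apply queued patches with a 3-way merge
  for p in "$QUEUE"/*/*.patch; do
      git am -3 "$p" || { git am --abort; echo "conflict: $p"; }
  done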
> Bye for now,
> Skybuck Flying.
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: tanish desai @ 2025-08-08 9:21 UTC
To: Skybuck Flying; +Cc: git@vger.kernel.org, Tanish Desai #TD
Yes, this approach can help resolve merge conflicts, but a major issue still remains: while resolving these conflicts, the LLM often removes parts of the program's functionality or unintentionally introduces bugs.
If we want to scale this system, we need a mechanism to run test cases from both branches being merged, so we can be confident that no functionality is lost during the merge and that the code remains stable.
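One possible shape for such a gate, sketched here with run_tests.sh standing in for the union of both branches' test suites:

  git checkout integration
  if git merge --no-commit --no-ff agent-branch; then
      if ./run_tests.sh; then              # run tests on the merge result
          git commit -m "merge agent-branch (tests passed)"
      else
          git merge --abort                # tests failed: roll the merge back
      fi
  else
      git merge --abort                    # conflict: escalate to LLM/human
  fi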
> On 2 Aug 2025, at 3:08 AM, Skybuck Flying <skybuck2000@hotmail.com> wrote:
>
> The AI/Gemini in general seems good at resolving/manipulating source code bases/files, so it should be able to solve merge conflicts, the question remains how big of a merge conflict it can truely solve before becoming confused and human intervention might be necessary, but by that point it might be too big of a mess for a human as well, so far the AI seems to be doing ok, but to early to tell.
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: tanish desai @ 2025-08-08 9:30 UTC
To: Skybuck Flying; +Cc: git@vger.kernel.org, Tanish Desai #TD
Ohh, great to hear that!
> On 2 Aug 2025, at 3:08 AM, Skybuck Flying <skybuck2000@hotmail.com> wrote:
>
> For now I am busy with applieing Gemini to a RamDiskSupportUtility to modernize it's code from Delphi 7 to Delphi 12.3:
>
> Brand new project/fork I started today:
>
> https://github.com/SkybuckFlying/RamDiskSupportUtility
>
> This tool would allow a Ramdisk to be created on startup of the system, formatted the ramddisk (sounds a bit dangerous ;)) files copied towards it and on shutdown files copied back to the harddisk. However the existing tool seemed somewhat old and a bit shady/not that well developed/not enough error detection.
>
> Since I am now on a super duper trooper system and don't want to risk damage to my system I've taken upon me to check the code, modernize it, have gemini and potentially other AIs look at it and finally use it. There is a risk that my involvement might actually backfire and somehow damage my system, but praying that won't happen. The project actually seems to rely on almost ancient code/tntunicode in a time when unicode support in Delphi still wasn't fully implemented.
Did the Gemini CLI convert the codebase?
> So today I even installed Delphi 7 enterprise to "time travel" back in time to see what kind of tntunicode gui component this project use to get an idea of how to re-create this old gui in a somewhat more modern delphi 12.3 gui, still vcl based for now though.
>
> It will be very handy to have this tool. I love the idea of having a ramdisk for firefox so the browser becomes lightning fast. This saves me from having to modify firefox code base and ripping out all of it's disk writing code, though it's very tempting to try and do that too at some point in the future or even better port the entire code base to Delphi just for kicks, so having AIs to be able to do that would be very cool and amazing, hence another motivation for this massive AI parallelism project.
>
> I hope once the tool is done and in a good state/shape it might be useful for others as well, who like to have lightning fast "storage operations" without actually wrecking their SSD disks due to wear and tear...
>
> This is also more first "real" delphi project were I will test out the capabilities of AI/Gemini and to see if it can lead to "real world" improvements to source code/projects/software/executables that would be cool and a good sign for the future !
Bye for now
Tanish Desai
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: Skybuck Flying @ 2025-08-14 1:23 UTC
To: tanish desai; +Cc: git@vger.kernel.org, Tanish Desai #TD
(I shall reply to your other replies in one reply):
Late replies: no problemo.
Why no Linux: I know Windows well and it runs well; I don't know much about Linux, and it's probably too easy to wreck ;)
Why not Git itself: is Git not race-condition safe? Is it only a client?
GCP? Google Cloud Platform? Does it cost money, or is it free? Why not Gitea? What does GCP offer that others don't have? :)
What is Skybuck's Gitflow? It's a gitflow technique where, before each new piece of work, a new branch is created; the work is done on that branch; and once the work is done, the branch is closed and integrated into another branch, such as an integration branch or master. In Git commands it would look roughly like the sketch below.
This Skybuck's Gitflow is also described on this very same mailing list.
Skybuck's Gitflow is still under evaluation. The AI liked it, though.
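A minimal sketch of that flow (branch names are illustrative):

  git checkout -b work/task-1 integration   # open a fresh branch per task
  # ...do the work, commit...
  git checkout integration
  git merge --no-ff work/task-1             # integrate once the work is done
  git branch -d work/task-1                 # close the branch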
Ramdisk project: No, for now it's a fail, but I have not given up yet. The original author got involved too. I'm also coming to understand what Gemini (and I myself) are struggling with in transitioning from AnsiChar to Unicode.
Some small improvements may have been made to the Ramdisk project. However, for now I believe the ramdisk service might cause Windows/the computer to hang on shutdown; more investigation will have to be done. The Windows 11 sandbox environment might be useful.
For now I am fatigued for a few reasons:
1. A little bit fed up with AI and its struggles ;)
2. Tired from Battlefield 6 multiplayer beta gaming, however in a few hours it will once again commence.
3. Super hot weather in the Netherlands and will continue for at least a week.
4. Bad sleep I guess.
5. Some real life things to take care of but nothing too serious.
So my mind is kinda gone... bummer.... I hope to recover from it... maybe a week from now, maybe two weeks.
That's a long time unfortunately.
Bye for now,
Skybuck.
* Re: Discussion: Future-Proofing Git for Massive AI Parallelism
From: tanish desai @ 2025-08-18 16:13 UTC
To: Skybuck Flying; +Cc: git@vger.kernel.org, Tanish Desai #TD
> On 14 Aug 2025, at 6:53 AM, Skybuck Flying <skybuck2000@hotmail.com> wrote:
>
> (I shall reply to your other replies in one reply):
>
> Late replies: no problemo.
>
> Why no linux: I know Windows well, runs good, don't know much about Linux, probably too easy to wreck ;)
>
> Why not git itself: Git not race condition save ? Is only a client ?
Visit this once: https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
>
> GCP ?: Google Cloud Platform ? This cost money ? or is it free ? Why not gitea ? What does GCP offer that others dont have ? :)
>
GCP offers free credits for 90 days, which were enough for testing and demonstrating how it can be scaled horizontally as well. Gitea can be used, but in this case GCP was used just to test how we can spawn multiple VMs (not Git servers), each running around 10–12 bots. This was done to demonstrate that if we develop a smart mechanism to merge AI code (as mentioned in the first mail), then using the previously described blueprint we could potentially spawn 1,000–10,000, or even millions of pods (just by creating more VM instances).
> What is Skybuck's Gitflow ?: It's a gitflow technique where before each new work a new branch is created, then work is done on the branch, once work is done the branch is closed and integrated into another branch, like an integration branch or master.
Yeah, got it. (Maybe a small insight: creating patches for all the commits of a branch could be useful, because patches contain only the local changes near the new code, not the entire codebase. This could help solve the problem of repeatedly sending very large, mostly unchanged code to the AI as context; the whole codebase should not be the context.)
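For instance (branch names are illustrative):

  # one patch file per commit on the branch, relative to master:
  git format-patch master..agent-branch -o patches/
  # each *.patch holds only the changed hunks plus a little context,
  # which is all that needs to be sent to the model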
>
> This Skybuck's Gitflow is also describe on this very same mailing list.
>
> Skybuck's Gitflow still under evaluation. The AI liked it though.
>
> Ramdisk Project: No for now it's a fail, but I have not given up yet. The original author got involved too. I'm also understanding what gemini and me myself included is struggling with in transitioning from AnsiChar to Unicode.
Ohh, if there's any way I could be involved, please email me outside this mailing thread. I'd be more than happy to contribute.
> Some small improvements may have been made to the Ramdisk project. However for now I believe the ramdisk service might cause windows/the computer to hang on shutdown, more investigation will have to be done. Windows 11 sandbox environment might be usefull.
>
> For now I am fatigued for a few reasons:
>
> 1. Little bit fed up with AI and it's struggles ;)
>
> 2. Tired from Battlefield 6 multiplayer beta gaming, however in a few hours it will once again commence.
>
> 3. Super hot weather in the Netherlands and will continue for at least a week.
Got lucky the monsoons hit just right here in India.
> 4. Bad sleep I guess.
>
> 5. Some real life things to take care of but nothing too serious.
>
> So my mind is kinda gone... bummer.... I hope to recover from it... maybe a week from now, maybe two weeks.
>
I hope it’s better now.
> That's a long time unfortunately.
>
> Bye for now,
> Skybuck.
Bye for now
Tanish