> In Git, commit is an overloaded concept ... Grace simplifies this by breaking these usages out into their own gestures and events: ...
To me, now having two different commands that I need to distinguish between actually adds some mental overhead. I need to think: is this really ready now, or only partially ready? Often I don't really know. I keep rebasing and editing my history of commits until I think they're in good shape, and then I publish (make a PR).
grace promote seems easy to distinguish, if it's only intended for merges, though I also don't see why it's such a problem to consider that just another type of commit. Also, how are conflicts handled there? In Git, a commit always comes with a well-defined state of the file tree, with one or more parent commits (or zero for the initial commit).
> Every save is uploaded, automatically
How does this handle files which are not yet tracked? Or are all new files added automatically? With Git, I always check first which files are staged before I commit a new directory, to make sure I don't add files that don't belong in Git, such as generated binaries.
> You can fix [the merge conflict] while you're in flow, and skip the conflict later.
To me, suddenly getting a merge conflict because of a background auto-rebase while I'm working on some code sounds more like a distraction that would pull me out of the flow?
> Personal branches, not forks
So, you mean it's a centralized system? But does that mean forks are not possible? Or that when you fork, there's no good way to work together anymore, e.g. merging branches from other forks?
> Grace will have a native GUI app for Windows, Mac, Android, and iOS. (And probably Linux.)
It sounds like Linux is not a priority now? This will turn off a lot of developers. I think for a new version control system it makes more sense to make Linux the priority; Windows etc. can come later.
The "probably Linux" line was when I was thinking of using .NET MAUI for the GUI, and they were considering adding Linux support. My bad, I need to change that.
.NET MAUI has decided not to do that, so I'm going to build the GUI with Avalonia, which does have Windows / MacOS / Android / iOS and Linux desktop support, as well as WASM support, all from the same code base.
If I weren't using a cross-platform GUI framework like Avalonia, I really might not bother with a Linux GUI, or at least deprioritize it; I expect most Linux devs will just use the CLI. I'm totally curious how much GUI usage there will be on Linux...
One data point: I'm a Linux user and I prefer GUIs for fiddly things like version control, although in some cases I end up using the terminal anyways because all the GUI options are terrible.
I'm a Windows user (and former Windows network administrator). I only use Linux as a target for containers, and rarely use a Linux terminal, so I know I'm not the right person to understand what Linux users want and need. I was a Unix admin in the late '90's for a couple of years, it was a bad enough experience to make me not want to be back in that world ever again. It's all subjective, of course.
I am pretty decent at designing UI's, and I hope Grace will be a non-terrible GUI for you (and for me).
I use PyCharm and I also prefer the builtin Git GUI over the command line, at least when also working within PyCharm. I use PyCharm mostly on Linux and Mac.
Yeah always tracking and uploading everything seems nice but brings with it security concerns. For example, if I generate a secret key or store a private keyfile into the repo before remembering to gitignore... err I mean graceignore it, then that key gets uploaded to the cloud. Oops.
In Git these mistakes are more easily avoided as you are deliberate about what you commit and what stays local.
> In Git these mistakes are more easily avoided as you are deliberate about what you commit and what stays local.
And yet GitHub has built an entire security feature - Secret Scanning - because developers do not easily avoid checking in secrets.
We have to face the fact that Git not being able to delete versions easily is a bug, not a feature, and that we do indeed sometimes need to delete versions from a repo. And so we've built a set of workarounds for Git to prevent pushes from succeeding when secrets have already been committed locally. It's not ideal.
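For concreteness, the kind of client-side workaround I mean is roughly a pre-push hook that scans outgoing commits before they leave your machine. Here's a minimal sketch; the regex patterns are purely illustrative, not a real scanner:

```python
#!/usr/bin/env python3
# Minimal sketch of a client-side pre-push hook that blocks pushes containing
# likely secrets. Illustrative only; install as .git/hooks/pre-push (executable).
import re
import subprocess
import sys

ZERO_SHA = "0" * 40
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key material
]

def pushed_commits(local_sha: str, remote_sha: str) -> list[str]:
    """List the commits that would be pushed to the remote."""
    rev_range = local_sha if remote_sha == ZERO_SHA else f"{remote_sha}..{local_sha}"
    out = subprocess.run(["git", "rev-list", rev_range],
                         capture_output=True, text=True, check=True).stdout
    return out.split()

def commit_has_secret(sha: str) -> bool:
    patch = subprocess.run(["git", "show", sha],
                           capture_output=True, text=True, check=True).stdout
    return any(p.search(patch) for p in SECRET_PATTERNS)

# Git feeds the hook lines of "<local ref> <local sha> <remote ref> <remote sha>".
for line in sys.stdin:
    local_ref, local_sha, remote_ref, remote_sha = line.split()
    if local_sha == ZERO_SHA:   # deleting a remote branch, nothing to scan
        continue
    for sha in pushed_commits(local_sha, remote_sha):
        if commit_has_secret(sha):
            print(f"Push blocked: commit {sha} looks like it contains a secret.",
                  file=sys.stderr)
            sys.exit(1)
sys.exit(0)
```

And the point stands: by the time a hook like this fires, the secret is already committed locally, and cleaning it out of history is the painful part.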
Grace will enable a combination of hoster-level Secret Scanning with a native ability to delete a version that you don't want. Imagine that you accidentally save a secret, it ends up in your personal branch as a Save reference, Secret Scanning catches it and prompts you about it: "A secret was detected. Should I delete that version for you?"
No rewriting, no "hey Copilot how do I fix my repo after I committed a secret?", just one click and it's gone.
It appears that this solves exactly no problems which I have, while introducing a slew of problems I don't have currently and am grateful not to.
I know this comment risks falling into the low-brow dismissal bucket, but it's hard for me to say anything more cogent than that. It all seems so obvious? Like, Kubernetes, your choice of cloud provider, "who works offline anyway"...
I got used to real-time version tracking when I was working on a book. The publisher insisted that the working files be kept in a Box folder (like Dropbox, or any other cloud-shmoud synchronized storage). This was... convenient and practical. This way of working is much better than the Git way if you work alone and produce dozens or hundreds of meaningful changes a day.
But at some point you have to publish. You have to make sure your state is correct to some level of certainty, and share it with other people. So you still need some kind of 'commit' thingie.
I think Grace gets both of these right. Personally, I just keep GitHub repos in a Dropbox folder, but if Grace catches on, I might consider hopping in.
Thank you. One of the original, first-night-dreaming-about-it inspirations for Grace was the OneDrive sync client; since they rewrote it back in like 2018 or whatever, it's been so rock-solid, and it's ambient.
I wanted that experience for version control; make as much happen in the background as possible, and make it fast when you need to interact with it.
OneDrive sync and "rock-solid" in the same sentence? I beg to differ.
Frequent conflicts due to non-synced files, the client crashing, randomly failing auth, files that are indicated to be recent and in sync but really aren't, no proper debuggability, no insight into why things don't work if they (frequently) don't. I could go on. OneDrive is a very bad thing to reference in this context.
That hasn't been my experience for at least six years. I think I've had one weird sync issue with OneDrive, like a couple of years ago... I forget, but it just works, across all of my computers, in near-real-time.
I think probably the big difference is "if you work alone". You have to go to extra effort with Git because you need your changes to be understandable and usable by other people.
I dunno if this is the right thing for team based coding projects. I think we just need something like Git with:
* A saner CLI
* Proper support for large files
* Proper support for submodules
* Better conflict resolution
I think Pijul goes some way towards some of those, especially the last one, and maybe submodules.
I am also hopeful for Pijul. But yes, the "if you work alone" backfires for it too. If I work alone, I can adopt Pijul easily, but I don't really want it. If I work within a group then I do want a better VCS, but I'd have to convince the majority of the people I work with to switch to the new thing, so essentially I can't switch to Pijul unless there is an indisputably good reason to do so. And incremental gains are not a good reason. "Saner" and "better" aren't enough.
On the other hand, we all work alone in between commits (in a broad sense of the term), so having a second tier of version control, a personal tier, may be a gamechanger.
There have been lots of projects over the last 10+ years that have been riffs on "It's Git, but <better in this way>", but none of them have really caught on beyond a small, admittedly passionate group.
I believe that's because once you go through the pain of learning Git, the idea of learning yet another distributed version control system or client seems so unappealing to most people that they'll never do it.
The truth is: if you're using GitHub or GitLab or some other hoster, you're already doing centralized version control; you're just doing it with a confusing distributed version control UX. I'm just trying to skip to the part where we just admit it and call it centralized and build something that makes that experience awesome.
When small trends change quickly - like, jQuery -> Ember -> Aurelia -> Angular -> React - the differences between them can be small.
When major trends change slowly, over decades - like relational -> map-reduce -> key-value -> document db -> graph db - the differences between them are usually much greater, and even if an older product adopts some piece of doing it the new way (think, SQL Server supports JSON storage well enough, but really you want to use a Document DB for that) it's a bolt-on or it doesn't quite fit with the design intent of the system.
Replacing Git is one of those large trend changes, and I don't believe that any "Git, but <better>" product will ever make that change happen. I think it's time for something really different: a pendulum swing back to centralized, but with modern, fast, cloud-native features and infrastructure, ready for high-scale monorepos, large file storage, and effectively infinite repo sizes.
I don't think there's a good, not-confusing way to build a distributed version control system. Others disagree, that's cool, and there are other wonderful version control projects like Pijul [1] and jj [2] and JamHub [3] and others who are trying to make a better distributed VCS. I know some of the people working on those projects, they're awesome, they care about doing it well, and I wish them all the luck in the world.
Something will catch on and replace Git, soon. Grace is my offering to that.
> There have been lots of projects over the last 10+ years that have been riffs on "It's Git, but <better in this way>", but none of them have really caught on beyond a small, admittedly passionate group.
I don't think so. The only ones I know of are Pijul and Jujutsu, which you mentioned. They're both quite new.
> you're already doing centralized version control; you're just doing it with a confusing distributed version control UX
Sort of... But actually, as soon as you go offline it's distributed.
Anyway I think more alternatives is always better and lots of the issues you listed in the readme definitely need solving, so good luck!
Do you ever need to bisect regressions in a writing project? Do you need to maintain multiple releases and cherry pick patches across? Or does the latest version always supersede the previous? It seems like a far easier problem than what git is designed for.
If they're used by the software and aren't deterministically producible by the code, then I don't really understand how anyone could say that they don't belong.
Hi! Which one are you working on? (I created Grace.)
I'd love to chat... I've spoken to the creators of several other VCS's, it's a nice group of people. Hit me up on Twitter or whatever, I'd love to hear what motivated you, and what you're solving for.
You don't know about it because it is far behind Grace.
One of my motivations is hatred of Git, but unfortunately, I hate CMake more, so I wrote a build system first that I literally just released a month ago. [1]
So I am rightfully unknown to other VCS creators.
Also, I hate Git despite being one of the 20% that know 80%. I have done things with Git that no one should do. [2] [3]
Another of my motivations is vendor lock-in. Yes, Git is decentralized, but it doesn't include everything people want for repos, such as issue trackers. These things are what GitHub, GitLab, and others use to lock users in. Mine will follow Fossil and provide everything, so users can switch providers easily.
So I strongly disagree with your cloud-first design.
My third motivation is binary files. Why don't we track them better? I mean, yeah, xdelta is cool, but we could do so much better.
I want to track changes to binary files the way we do with text. [1]
For example, if you have a PNG texture, my VCS will track changes at the pixel level. If you have a Blender file, it will track changes at the item level, from meshes to materials and vertices, and everything in-between.
Yes, it will be a lot of manual work to implement each file type, but that's how I plan to make money.
[1]: Only files that use lossless compression, of course.
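To make the pixel-level idea concrete, here's a minimal sketch of a pixel-level diff between two revisions of a texture, using Pillow. The file names are made up, and this isn't how my VCS will actually store changes; it just shows what "changes at the pixel level" means:

```python
# Minimal sketch: find the region of a PNG that changed between two revisions.
from PIL import Image, ImageChops

def changed_region(old_path: str, new_path: str):
    """Return the bounding box of changed pixels, or None if the images are identical."""
    old = Image.open(old_path).convert("RGBA")
    new = Image.open(new_path).convert("RGBA")
    if old.size != new.size:
        return (0, 0, *new.size)  # size changed: treat the whole image as changed
    return ImageChops.difference(old, new).getbbox()  # None means no pixel changed

print(changed_region("texture_v1.png", "texture_v2.png"))
```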
Interesting idea. I assume it comes with quite a number of technical difficulties (even beyond the manual work for each file type).
However, I don't think making money from this will work. A new version control system must be 100% open source; otherwise it will not be an option for most people. Even if you only sell extensions for certain file types and keep the core open source, I don't think this will work, as you'd then be holding back the main interesting feature.
As a games industry (where Perforce is by far the most common VCS in a team of any reasonably significant size) person, we absolutely do not give a crap about source-available or otherwise. We want tools that work, even if we have to pay for them.
None of us have time to delve into the workings of a source control system to troubleshoot problems instead of working on our .. work.
Sorry if it sounds harsh as that is not my intent. My suggestion is you spend much more time on making sure your system is reliable and has proper support (even if paid) instead. Bonus points for easy GUI interfaces that artists and designers can use without shooting themselves (and their team!) in the foot.
No one loves Perforce, but it works well enough and nothing else comes close to it for the games industry.
Ideally, you want a self-hosted mode (not cloud-only) and a native UI, even if only as stretch goals. We don't like our source, and more importantly our assets, hosted outside our control, and native UIs are just better.
Diversion is already targeting users of Perforce and game developers storing large binary files. Diversion is currently closed source, but may open source their platform in the future. https://news.ycombinator.com/item?id=39088551
Isn't that "simply" a task for the diff viewer? - VCSs like git treat all files like objects (while storage does some deduplication) and only few things imply textual content, and I wouldn't want an automated merge on an image or 3D scene anyways (while in rare cases this might work, I guess more often it would produce more trouble than where it helps)
Properly tracking changes would allow less disk space to be used.
xdelta is not supposed to be used on compressed data [1], but my VCS will handle that transparently using less space.
You are right to be nervous about auto merges, and that will be the biggest work when implementing support for a file type.
But I think it is possible in many cases.
For example, Blender files store everything in one file. You can reference things between files, and people do that, but often, you will still want to store the material for a mesh with the mesh.
If someone changes the mesh, and someone else changes a material, those changes should automerge.
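As a rough illustration of why that can work, here's a hypothetical three-way merge over named sub-objects. The real Blender data model is far more involved, and the names and values here are made up; this is just the shape of the idea:

```python
# Hypothetical sketch of a three-way merge over named sub-objects of a binary file
# (e.g. a mesh and its material), rather than over raw bytes.

def merge(base: dict, ours: dict, theirs: dict) -> dict:
    """Merge two sets of changes against a common base, key by key."""
    merged, conflicts = {}, []
    for key in base.keys() | ours.keys() | theirs.keys():
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:                 # same change (or no change) on both sides
            merged[key] = o
        elif o == b:               # only the other side changed it
            merged[key] = t
        elif t == b:               # only our side changed it
            merged[key] = o
        else:
            conflicts.append(key)  # both sides changed it differently
    if conflicts:
        raise ValueError(f"Needs human review: {conflicts}")
    return merged

base   = {"Cube.mesh": "v1", "Cube.material": "grey"}
ours   = {"Cube.mesh": "v2", "Cube.material": "grey"}   # modeler edited the mesh
theirs = {"Cube.mesh": "v1", "Cube.material": "gold"}   # designer edited the material
print(merge(base, ours, theirs))   # mesh "v2" and material "gold" merge cleanly
```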
One of the (many) major improvements of git over CVS was that CVS stored diffs, while git (not at the lowest level, but at the practical layer) stores the full files.
Depending on diffs made it hard to jump between versions (any operation needs to calculate the intermediate steps), while tracking full files made it possible for git to track changes even across files, so in some cases it can even merge when code is moved between files.
And as for the merge: even the "simple" merge you describe will likely lead to issues when a designer has been improving a surface to make a scene look "right" while somebody else changes the scene, and the merge then goes through somewhat unseen. And for that kind of stuff you don't have any unit tests as at least a safety net, as you do with code ("it compiles and the tests pass, so the merge didn't break anything fundamental").
About diffs, that is true, but my VCS does not use diffs.
And I am aware of the combination problem, and there will be the option to lock things.
But when it comes to visual things, someone does have to stare at it. If you watch https://youtube.com/watch?v=iZre2MUyvoQ , the thing you should notice is that John Lasseter is constantly checking the integration of separate parts.
A director might tell a modeler and a designer to work on the same scene at the same time, as you said. In that case, I kinda expect the director to check the integration.
Binary files are harder because there is no integration test, yes. But integration tests don't catch everything in code either, and people still want automatic merges.
Thank you! I read your comment yesterday and forgot about it, then today morning, suddenly realized I needed to version control a bunch of sound files (which aren’t getting pushed up to any “cloud”) and remembered this. Will check it out.
This is a mixed bag to me; some of the features sound very cool. I like automatic pulling, doing away with stashing, and splitting commits into checkpoints and merges; optional locking and support for large binary files make it super adaptable to non-programming workloads.
I don't think uploading on save is a wonderful idea for many reasons mentioned here. I don't agree with replacing forking with branching because forking is not only done to contribute to the mainline project. I'm against stuffing AI where it solves no problem.
I also get the feeling that if Grace goes anywhere it's going to be commercialized. It's practically screaming paid service.
Not with something like GitHub Secret Scanning monitoring things, or we could imagine a local ML model automatically checking every save before it gets uploaded.
This is an easily-solved problem. And in case one slips through, versions are easy to delete in Grace.
I have many other things that aren't secrets, which I still do not want to be uploaded.
Don't get me wrong, many concepts are great (such as the watch/auto rebase). But I still would base everything on top of git. Call it the network effect or whatever, but every nice concept you promote could be done with a git wrapper. Version repos are a solved problem, and git is so much a de facto standard that fighting it will be ... interesting.
> or we could imagine a local ML model automatically checking every save before it gets uploaded.
It's so 2024 to not actually engineer things but to say (only say) "we'll tackle it with some ML model". I get goosebumps at the thought that a VCS would require a multi-gigabyte model file, or (let's be honest, this is 99% more likely; nobody creates their own models) online ChatGPT access.
> This is a mixed bag to me; some of the features sound very cool. I like automatic pulling, doing away with stashing, and splitting commits into checkpoints and merges; optional locking and support for large binary files make it super adaptable to non-programming workloads.
Thank you, I really am trying to write something that meets the needs of developers in the late 2020's.
> I don't agree with replacing forking with branching because forking is not only done to contribute to the mainline project.
That's an interesting point.
I'll use "GitHub" below as a stand-in for "Git hoster", but they invented the fork, so, you know...
I start with the idea that forks are not a Git feature; they're a GitHub feature. They're bolted on to Git to enable open-source dev in lieu of opening up the main project to write access from everyone. You write to your own fork, which you have write access to, and then ask someone with write access on the main project (through a PR, usually, also a GitHub feature) to take your contribution and write it to the main project.
All of this is yet-another workaround that we're used to because of Git, but that doesn't mean that it's not a workaround, or that it's the best way to do it.
My design intent with personal branches instead of forks is to say: if you want to make a copy and go forth and work on it totally separately from the main project, cool, go ahead. But if you want to contribute to an open-source project, or have your own tweaks on it but still keep up-to-date with `main`, well, I'm writing a whole new VCS, let's rethink this part. Let's acknowledge that there's an important use case that forks have been the answer to so far, but that we can deliver using personal branches. Authorization in Grace won't just be at the repo-level; it will be at the branch level, too.
So you'll be able to create personal branches that you can write to, even if you can't write to `main` on an open-source project. And you'll be able to create PR's and have that code promoted by someone who does have access.
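As a purely hypothetical sketch, not Grace's actual schema or API, branch-level authorization just means the write check happens per (repo, branch, user) instead of per repo:

```python
# Hypothetical sketch of branch-level authorization; all names are made up.
# A contributor can write to a personal branch without write access to `main`.

PERMISSIONS = {
    ("widgets", "main"):          {"alice"},          # maintainers only
    ("widgets", "bob/feature-x"): {"bob", "alice"},   # bob's personal branch
}

def can_write(repo: str, branch: str, user: str) -> bool:
    """Write access is decided per (repo, branch, user), not just per repo."""
    return user in PERMISSIONS.get((repo, branch), set())

assert can_write("widgets", "bob/feature-x", "bob")   # writes to their own branch
assert not can_write("widgets", "main", "bob")        # but main needs a promotion by alice
```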
> I also get the feeling that if Grace goes anywhere it's going to be commercialized. It's practically screaming paid service.
It's 100% meant to be easily adopted by the large hosters and offered as a new, web-scale version control system.
I expect Grace to be offered for free, the way that Git currently is, for personal accounts. The big hosters don't compete on the version control level, they compete on the services above that (project management, issues, CI/CD, security, etc.) and that shouldn't change whenever the thing that replaces Git catches on.
And it's also designed to meet the needs of enterprise customers. I've been an enterprise developer for much of my career, so I have some idea of what that requires. There's no way that a replacement for Git can be successful without addressing enterprise needs, and since we're building from scratch, we don't have to bolt that on anymore.
Because of that, I don't see a way to use venture capital to create "VCS vNext" to go after Git. The win for a new VCS is going to be adoption by the big hosters, not the creation of a startup that would have to also build All The Things (project management, issues, CI/CD, security, etc.) just to have a shot.
100% open-source, built in the open, no way around it as far as I can see.
Things like GitHub's Secret Scanning will still be there to check...
And if a secret ends up on your personal branch, you can delete the Save or Checkpoint or Commit and it'll disappear; no "rewrite history". Deleting versions is a native, expected use case in Grace.
Git not being able to delete versions is a problem that's so bad that we've had to build massive systems (i.e. Secret Scanning) as a workaround. Grace, being built now, takes it for granted that versions need to disappear sometimes.
In Grace, saves and checkpoints, and, like Git, branches themselves, are ephemeral; they're meant to be created and deleted. Saves and checkpoints will be deleted automatically, with repo-level timeout settings.
> Things like GitHub's Secret Scanning will still be there to check...
When a DVCS is successful, there are many server implementations. For git, there is the official git daemon and its wrappers (gitolite...), but also many forges, some of them with many local instances (GitLab, Gitea...). Grace should not depend on an optional (and error-prone) feature of the server.
> Git not being able to delete versions
I don't understand. There is no concept of "version" in Git. You can delete commits, references, and objects. For instance, you can delete/update a commit in your local branch. The old commit is still reachable, but you can run the garbage collector to delete it. You can also force a push that deletes commits from a remote repository.
As far as I know, the problem with deleting published secrets on public forges is that as soon as anything is published, bots will copy it. And some of these bots are ill-intentioned.
My guess is that it's going to end up a clusterfuck of everything and a kitchen sink built on ill-defined murky concepts with an easy UI bolted on top. Just the opposite of git.
You don't seem to care much about simplicity, you care about shiny features.
Could you update the project's README.md to say, right in the first paragraph, objectively what Grace is and what its main selling points are, followed by an example of Grace's happy path?
The document as it stands just presents a list of vague buzzwords that are irrelevant for a VCS (cloud-native?!), and even after scrolling halfway through the doc the reader still has no good clue what they are reading about, let alone why they should bother with Grace.
A paragraph with an objective description would be helpful, followed by a small example presenting Grace's happy path. All the marketingspeak just gets in the way.
Disagree. "cloud-native" to me sounds offputting. One of the main selling points for me of git over the likes of SVN is the capability to work offline, in restricted networks and through non-http transports like mail.
"Cloud-native", to me, means "built to scale up well". I find that's the connotation that most people associate with it.
Git, or any file-server based software, is not built to scale up well in today's world. Large Git hosters have to invest entire teams to manage their file servers and their Git front-end systems to create a web-scale service on top of a file-server based piece of software. I'm just skipping to the part where you don't need that anymore because Azure / GCP / AWS PaaS services already handle that.
And, in any team dev situation, you're not getting anywhere until you `git push`, and that requires an internet connection. Assuming ~100% connectivity for devs around the world, in the late 2020's, is the right assumption to make. If offline is a hard requirement, Git isn't going anywhere.
> through non-http transports like mail
yeah, I'm not building a new VCS for that 0.0000001% case.
> Git, or any file-server based software, is not built to scale up well in today's world. Large Git hosters have to invest entire teams to manage their file servers and their Git front-end systems to create a web-scale service on top of a file-server based piece of software. I'm just skipping to the part where you don't need that anymore because Azure / GCP / AWS PaaS services already handle that.
This doesn't really make any sense. Most people are not "large Git hosters" (and so for them there is no functional difference between "outsourcing Git hosting" and "outsourcing to a Grace hoster that is outsourcing file handling"), and even those who are large Git hosters are still going to need a team of sysadm- sorry, "cloud experts" to manage the AWS/Azure/whatever infrastructure.
What actual material benefit is being provided here? It seems to me like it just trades "administrating a standard hosting environment" in for "administrating a vendor-locked hosting environment".
> This doesn't really make any sense. Most people are not "large Git hosters"
I do work for GitHub, so I do know what it takes.
Most people don't run their own Git servers, they use GitHub / GitLab / Azure DevOps / etc. and I intend to create something that's easy for those hosters to adopt.
Grace is also designed to be easily deployable to your local or virtual server environment using Kubernetes - and if you're large enough to want your own version control server, you're already running Kubernetes somewhere - so, party on if you want to do that, but I expect the number of organizations running their own version control servers to be low and shrinking over time.
And Git isn't going anywhere. If that's what you want to run on your own server and client, I won't stop you.
Whoa, the line "Every save is uploaded, automatically [by default]" needs to qualify the non-default options. Can a company policy demand that? I hope there is a protocol-level, client-side opt-out for that, otherwise this VCS will work for the company first, and the dev is an afterthought.
"Upload every keystroke" is a huge no, and is going to be abused by companies looking for some performance metric to apply to devs (cloud IDEs are going to lead to that as well). Or things will revert to pre-git workflows where a huge number of files will remain open/changed until the final submit/push.
My workflow is to do the `grace checkpoint` equivalent and only make it public once it is presentable (and won't waste other peoples time when looking over or reviewing it). I never ever want these personal checkpoints/commits anywhere else. Mercurial/hg initially also had no easy way to have and clean up local-only commits, so for me and many others git it was.
> Whoa, the line "Every save is uploaded, automatically [by default]" needs to qualify the non-default options. Can a company policy demand that? I hope there is a protocol-level, client-side opt-out for that, otherwise this VCS will work for the company first, and the dev is an afterthought
I'm not so worried about the company. If they want every save uploaded, there are other ways to accomplish that, and they will have their upload. As a developer, I do not like "every save" saved because many editors trigger reformats on saves and such. Who cares about pre-reformatted code? That makes for junk history being pulled into VCS in a way that will eventually make the VCS data garbage. This is a feature that works for some workflows, but I suspect will be not so great for the team.
I don't think that works for the company or the dev. It turns history into a giant steaming pile of garbage instead of a series of meaningful changes. History will be riddled with invalid state: Code that won't even compile, code that compiles but just had something important deleted without yet being replaced, etc.
> History will be riddled with invalid state: Code that won't even compile, code that compiles but just had something important deleted without yet being replaced, etc.
No, it won't.
Saves are ephemeral; they're for your personal use to look back at the changes you've made recently, to enable a time-limited file-level undo, and to help you get back into flow after an interruption by being able to review what you were thinking and working on. Saves will be automatically deleted after a repository-level settable length of time, I'm thinking 7 days by default, but we'll see what makes sense.
Checkpoints are also ephemeral, they'll just have a longer life before getting deleted. They're for your reference, to help you keep track of your work, or keep track of an interesting version, or whatever you want to use them for. Or don't. Up to you. I don't imagine caring, for instance, what versions you checkpointed nine months ago.
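To make the retention idea concrete, here's a hypothetical sketch of the kind of expiry check I have in mind; the reference types come from Grace, but the defaults and the code itself are illustrative, not the actual implementation:

```python
# Hypothetical sketch: saves and checkpoints age out after a repo-level window;
# commits, promotions, and tags don't. Defaults here are illustrative only.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "save": timedelta(days=7),         # the default I'm thinking about above
    "checkpoint": timedelta(days=90),  # longer-lived than saves; purely illustrative
}

def is_expired(ref_type: str, created_at: datetime, now: datetime | None = None) -> bool:
    """Ephemeral references expire; anything without a retention window is kept."""
    ttl = RETENTION.get(ref_type)
    if ttl is None:                    # commit, promotion, tag: keep forever
        return False
    now = now or datetime.now(timezone.utc)
    return now - created_at > ttl
```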
This eliminates the "squash vs. no squash" debate. The only references that get to `main` are promotions. Nothing to squash.
All of this makes version control more ambient, more something that just happens in the background, effortlessly. Once you try it, it's really nice. Obviously, I've been the first beneficiary of it.
I wouldn't want to have to explicitly "push" changes to my OneDrive files, and in the same way, I don't want to have to explicitly "push" changes on my own branch anywhere, that should just work.
You're not alone. My experience is that about 20% of devs deeply understand Git and are very comfortable with it, and about 80% know the basics and hope nothing goes wrong.
I'm somewhere in the middle myself. That's part of why I designed Grace to be much easier to understand. I can teach it to you in about 15 minutes, not the days and weeks it takes all of us to feel like we understand Git.
> To be fair, I've been using git for 8 years and I still don't quite understand it beyond the basics.
I don't see why this is supposed to be a problem. What constitutes "the basics" is what you use in your everyday routine, and if that works perfectly well then there is absolutely no need to do something you never need to do.
While I don't judge someone for not knowing git past the basics because, as you point out, if it works, it works, the very valid fear is that they'll somehow get into a funky state and have to find a git expert to fix it for them, or painfully muddle through it, with the very real fear that their work will get lost somehow. If you know what you're doing, that doesn't happen, but if you're not an expert, it's a very real thing that can happen, so it's that fear that constitutes a problem for some.
It's this black box that saves all my hard work, and if I accidentally hit the wrong button, it'll delete all my data and find my kids and scare them as well.
I was fortunate enough to dive deep into git professionally so I'm good enough with it to get myself out of trouble, but watching others use it, I can understand their worry.
It's gotten me along so I haven't bothered, but occasionally I fall into a mess and find some improper/inefficient ways around it. Every time I try interactive rebase I get into a huge mess where it can't apply some updates for some reason, and I say f it, just do a git reset --hard, apply the commits I want, and force push.
> Not a problem I thought I needed to solve, but okay. Also this means I can't easily run it locally?
I would add that this sounds like a big step backwards, as it conveys the idea of an SVN-like version control system designed for the service provider to hold your project hostage.
> I already understand git, so does everyone on my team, and everyone that interviews...
Really? Because every team I've ever met could use git but the moment anything left the golden path they had to either 1. delete everything, reclone, and manually fix things up, or 2. turn to the one greybeard who actually did understand git. Either your team is the 99th percentile, or your definition of "understand" is rather generous.
> Really? Because every team I've ever met could use git but the moment anything left the golden path they had to either (...)
I've been using Git for over a decade and I never had the need to "delete everything, reclone".
The only time I screwed up a Git repo was when I was experimenting with storing Git repos in USB pens and one of them got corrupted. I have no idea what might lead anyone to screw up a Git repo, because that's simply unrealistic.
I don't think this is a good example. Forcing a push means that the repository will lose commits, but you still keep yours in your local branch. This means the repo is not broken, but at best you have a perfectly valid local repository that just happens to be out of sync.
If you rename your local branch and set it to not track the remote one, and afterwards you fetch changes from the remote branch, then you're done.
It's not meant to be a local version control system, unless you enjoy running local Kubernetes clusters (which I have to do, but don't enjoy).
It's meant to be the next big thing in version control - no reason not to go for it - which means that it would have to be picked up by the major source control hosters, and since I know what it takes for GitHub to run its infrastructure, I know that it makes much more sense to build something new on PaaS services, not on file servers. Not anymore.
> I already understand git, so does everyone on my team, and everyone that interviews...
Yeah, but do they? That's not my experience, and it's not the experience of most people I talk to about it. Most devs I've asked about it understand the basics of how to use Git, but they're still afraid of it if anything goes wrong. My guess is that the ratio is 20% deeply understand it, and 80% only know what they need to and hope nothing bad happens.
Maybe your team are all a bunch of reflog wizards... that's awesome. And uncommon.
And I almost always get laughs and head nods when I talk about the problems with Git's UX.
> Is large files the main problem this solves?
No, but it's a big problem for gaming companies, who are mostly stuck on Perforce. And Git can't handle them well without the bolted-on LFS. And with the rise in monorepos, more and more enterprises want to be able to store more and bigger files than ever before.
> And maybe requires an internet connection?
Yes, absolutely, it does. So does Git if you expect to push anything anywhere. And if you happen to be doing dev using Azure or GCP or AWS you need one too.
Building something that would become popular in the late 2020's, and assuming that users will have solid Internet connections (don't forget satellite) is what makes sense. If you're still in a situation where you need offline VCS then, Git will still be there.
> Maybe this is for pair programming?
You could use it for that, but pair programming is not a direct design intent.
> It's not meant to be a local version control system, unless you enjoy running local Kubernetes clusters (which I have to do, but don't enjoy).
You should be clear about what is a major design trait, and arguably a major design flaw.
Also, there is already standard terminology for this: centralized VCS. I don't understand why you decided to avoid objective descriptions of your project's single most important design trait and instead resort to vague meaningless buzzwords like "cloud-native" or "real-time". In fact, in light of this those terms start to sound like weasel words used to deceive the reader.
When I hear "cloud-native" I think "built to scale up well". As opposed to "built to run on file servers" which means "doesn't scale well at all".
Is that just me?
Also, [1]. I start by saying it's centralized. I'm proud of it. It's the right direction for moving version control forward. And modern use of Git isn't really distributed anyway; it's centralized. We don't push to production from our dev boxes.
Hm. To me, cloud-native screams PaaS: completely out of my control and completely subject to the whims of some company that does not have my best interests in mind. It implies the impossibility of firing up a local instance for experimenting without the fear of leaving traces. In short, something to be avoided if possible.
> It's meant to be the next big thing in version control
I wish you the best but kind of hope it isn't. I want my vcs to be local and conceptually simple. I definitely don't want a client-server architecture!
> And I almost always get laughs and head nods when I talk about the problems with Git's UX.
Yes, the UX is bad. But it's conceptually simple: blobs, trees, commits, pointers (branches etc). I really fear someone will replace Git with something having a better UX but conceptually much more complex.
Complexity bad.
We've gone over this so many times as an industry and we haven't learned yet.
I agree, complexity bad. So why do you like Git? :-)
Git is _incredibly_ complex to understand, as proven for almost 20 years by the vast majority of people who have been forced to use it. And by quite a bit of academic and industry research, for instance, [1].
I can teach you Grace in about 15 minutes. How many days and weeks does it take most devs to start to understand Git? And even when they do, for most, it's only the basics, and please don't let anything go wrong. I mean, there were people for over a decade who made their living running week-long workshops on learning Git. I don't see how you could run a half-day-long workshop teaching Grace, unless you go really slowly.
If you're one of the probably 20% or so who really feels like they understand Git and are in control of it, that's awesome. But you're projecting your experience more widely if you think that's the norm. It's not.
As for local, well, if you're working with a team on GitHub or GitLab or Azure DevOps or some other hoster, you're already doing centralized VCS, you're just using a decentralized VCS to do it. Most shops don't let you push to production from your dev box, right?
> How many days and weeks does it take most devs to start to understand Git?
A few weeks to understand a technology you’re gonna be working with for years to come is nothing.
> And by quite a bit of academic and industry research, for instance, [1].
Isn't that a positive aspect? It's well studied and there's a wealth of info about it for just about anything you need to do.
I see Grace less as a git replacement and more as its own niche. I certainly see the benefits of easier onboarding and centralization for companies and education but those who grew up with git will likely keep using it
> I agree, complexity bad. So why do you like Git? :-)
I think you're trying to fabricate problems where there are none.
Git's UX problem lies in the way its CLI is not intuitive for those unfamiliar with it, but a) using GUI frontends like SourceTree lets newbies sidestep that issue, and b) with time you onboard to the CLI and everything just works.
At best, your suggestion to use another user interface is equivalent to suggesting Git users adopt a new GUI frontend that's polished in a different way.
> Git is _incredibly_ complex to understand,
I don't know what you can possibly mean by "incredibly complex".
For end users, your mental model can be limited to: a) you clone a repository, b) you commit your changes, c) you push your changes to make them available to everyone, d) you pull everyone's changes to have access to them.
This is hardly rocket science. I mean, why do you think Git managed to become the world's de facto standard VCS and sets the gold standard for VCSs?
> I think you're trying to fabricate problems where there are none.
No, I'm not. The problems with Git's UX are well-documented, and have spawned many projects over the last 10+ years trying to deliver "Git, but easier" or "Git, but better", so it's not just me who sees this.
I'm happy for you that you're comfortable with Git, or so indoctrinated to the workarounds required to use Git well that you're used to them. I believe it's time for something very different, and much easier to understand.
> why do you think Git managed to become the world's de facto standard VCS
I think it was because Git has lightweight branches, and an ephemeral working directory, both of which made it nicer to use than the older, slower, centralized VCS's. I've kept both of those features in Grace.
I also think it was because of GitHub wrapping a lightweight social network around Git and popularizing it, at the same moment that shared open-source dev really started to catch on as an idea. Without GitHub, Git wouldn't have won.
I do not think it was because Git is easy to use, overall. Again, maybe 20% of devs really get it, and the rest don't and just hope nothing bad happens. It was better on some important axes, and we've all paid the bad-UX tax to get those better parts, but 2005 was a long time ago, with a very different set of network and hardware conditions, and we can do better.
> I believe it's time for something very different, and much easier to understand.
I'd like to reiterate my request for clarification of the concepts behind Grace. If it's as easy to understand as blobs, trees, commits, and refs, I'm sold!
> Without GitHub, Git wouldn't have won.
True, but git is good not because of GitHub, git is good because it's so simple.
I'm scared you will replace git with something easier but a lot more complex. I don't want easy, I want simple.
No it isn't. Git is just blobs, trees, commits, and refs. Git isn't easy but it's conceptually simple. I'll take simple over easy anytime.
If you could explain the concepts Grace is built on, that'd be great!
> If you're one of the probably 20% or so who really feels like they understand Git
Again, blobs, trees, commits, and refs. I don't know all of git's crazy commands, but they can be explained in terms of these four simple concepts.
> As for local, well, if you're working with a team on GitHub or GitLab or Azure DevOps or some other hoster, you're already doing centralized VCS, you're just using a decentralized VCS to do it.
No, that is still fully decentralized. Each team member has a full copy of the repository, which, if GitHub or GitLab or Azure DevOps or whatever suddenly disappeared could be promoted to be the new shared source of truth.
At my $job-2 we were using GitLab but it often went down. I just set up a git repository on one of my servers and authorized everyone's ssh keys: it took me ten minutes and we had a way to collaborate even with GitLab down. Yes there weren't pull requests or anything, it was just a dumb repo used over ssh. But that was the whole point!
That may be, but the state of your repo, expressed as a combination of the 4 things listed after doing an arcane and globally unique sequence of git commands, is in no way conceptually simple. If it was, the implicit lurking horror that every programmer knows lies inside git would not be a shared traumatic developer coming of age story. You are the exception here.
> No, that is still fully decentralized.
The word decentralized does not really apply here. Is Figma decentralized? Do you ever do peer-to-peer git? Do you really? Or do you kind of just have a single source of truth with a lot of local copies, that allow offline-first workflows that you rarely need.
> if GitHub or GitLab or Azure DevOps or whatever suddenly disappeared could be promoted to be the new shared source of truth.
This is not a selling point you think will be taken seriously, right?
> Do you ever do peer-to-peer git? Do you really? Or do you kind of just have a single source of truth with a lot of local copies, that allow offline-first workflows that you rarely need.
Have you read my comment? Yes I did. I use GitHub to sync my git things, but if it were to disappear I could easily start using something else. Sometimes I push between other remotes too. Each of the repos is self-contained and whole by itself.
> > if GitHub or GitLab or Azure DevOps or whatever suddenly disappeared could be promoted to be the new shared source of truth.
> This is not a selling point you think will be taken seriously, right?
Again, have you even read my comment? This is not a theoretical scenario, it's a thing that happened to me in the past. Thanks to git's distributed nature it was very easy to work around.
No. The vast majority of software does not consist of four simple and easy-to-grok concepts.
The vast majority of software consists of badly designed abstractions full of hacked-on workarounds for exceptional cases. Such as Subversion: what a horror that was!
> Yes, absolutely, it does. So does Git if you expect to push anything anywhere. And if you happen to be doing dev using Azure or GCP or AWS you need one too.
Sure, if you want to push, but only maybe 10% of my Git commands relate to pushing/internet-related stuff. The majority of my work is local-only commands that can be run on an airplane without wifi. Git lets me defer the internet-required stuff to a later time. It's not clear Grace will allow me to do that at all.
Also I once had a case of working on an air-gapped network. That was an interesting case that I'm not sure Grace would be suitable for at all? Granted that's super niche.
The fact that most of your Git commands are local-only is an artifact of how Git works, but I expect that ~100% of the time, you have an internet connection, so the fact that Grace needs to be connected to the cloud just isn't a thing I worry about.
I'm not writing a new VCS based on the 0.00000001% "but I'm on an airplane without WiFi" case.
There's ~0% reason in 2024 to build software for offline use cases, and even less reason in 2026 and 2028. I'm happy to cede that to Git if you really need it.
As an industry, we fetishize offline for version control only because Git sort-of does that. Again, it doesn't really... you still have to push to do real work with your team, but we need to stop pretending that's a hard requirement. It's totally not, it's just a "feature" of Git that gets in our way today more than it helps us.
> Also I once had a case of working on an air-gapped network.
Coming from Microsoft, and being familiar with the air-gapped Azure instances for government, I designed Grace to be able to run on those Azure clouds. In other words, all of the PaaS services that Grace would use on Azure are present in those clouds.
Even the air-gapped world isn't "offline", it's totally networked, just on a network that's not connected to the Internet.
I haven't specifically looked at similar AWS instances, but I have to believe it's possible there, too.
The design and motivations document is pretty long but doesn’t really describe the design. Things like which language you use aren’t the design, they’re more like design constraints and the environment in which the design happens.
The document about branching strategy [1] is closer to what I’d expect of a design doc.
But it avoids the hard problem of how live merges to the child branch work. What happens when a large refactoring automatically happens in the background while you’re in the middle of editing? In a fast-moving repo, it seems like new compile errors could spontaneously appear at any time, or tests could start breaking on their own. It seems like it would be frustrating to debug.
Hi - can Grace support partial commits somehow? Such as if I want to check in part of a file but not other parts? This is a key feature of Git for my workflows but doesn’t seem to be plausible at all if files are pushed up on save. Unless this would be part of “promote requests” only?
Not at the moment, and probably not for v1.0 unless that bubbles up as a huge blocker.
You could accomplish it with something like:
- Make the changes in your branch
- Make a new branch off of `main` and cherry-pick the changes you want from your branch into that new one
- Commit and promote from the new branch; at this point you can delete the new branch
- Auto-rebase will run and propose a good merge to your original branch, which would include the partial changes you now have both in `main` and in your branch.
I still have to write cherry-pick - not sure that I'll call it that - and promotion conflict processing using LLM's. But something like the above steps would do what you're asking without too much effort.
There's no way to tackle the entire surface area of 20 years of Git in one release. I'm sure we'll see workarounds like that in v1.0 and learn from them to improve 2.0 and 3.0.
Moin! Congratulations on what looks to be a great deal of work for a solo developer.
Whilst you might see some kickback here, I personally think it’s quite brave to take something like version control that has such a large established user base with git and say, I can do better.
Also nice to see this being written in .NET. It’s just so fast these days and multi-platform. If you’re looking for inspiration for the various clients, I recommend the open source BitWarden project. I’ve learnt a lot from that.
Why does the concept of a commit have to be broken into three distinct concepts; checkpoint, commit and promotion? Apart from communicating intent, what does the distinction buy me? There may be a good reason for having these baked into the VCS, but it's not clear from the readme, so I think most git users will just get the impression that grace imposes a particular workflow and forces the user to perform extra administrative tasks.
A better question is: why does git only have one gesture for it, when devs clearly use it to mean different things already?
All of the squash vs. no squash debate, which may or may not influence the way you use `git commit`, is a workaround - that we've forgotten is a workaround - for the fact that Git has only one way to say it.
Another way to say that: one of Git's leaky abstractions - the "commit" - forces us to use workarounds to make sense of it and how it's used and where it should be tracked and shouldn't be tracked.
Grace just decomposes those separate use cases into their own gestures to make it easier for you to track your own work in your own branch. If you want to see all of the references in your branch, `grace refs`. If you only want to see the checkpoints and commits - i.e. you want to see the versions that you explicitly marked as interesting for one reason or another, you have `grace checkpoints` and `grace commits`.
Promotions are what Grace uses instead of merges to move code from a child branch to a parent branch. We sometimes call merges "commits" in Git, and, again, leaky abstraction and overloaded term.
> so I think most git users will just get the impression that grace imposes a particular workflow and forces the user to perform extra administrative tasks
A short intro to Grace - like 15 minutes - will change that impression, I hope. Most of Grace's workflow will be the same as Git, some of it will be different, and that's OK. New tools bring new ways of working, and that's a good thing, especially when looking at Git's UX.
> A better question is: why does git only have one gesture for it, when devs clearly use it to mean different things already?
Do they, though? I mean, most users simply use a GUI layer on top of Git, and thus often are oblivious to what Git is doing under the hood.
> All of the squash vs. no squash debate, which may or may not influence the way you use `git commit`, is a workaround - that we've forgotten is a workaround - for the fact that Git has only one way to say it.
No, not really. At best it's a debate over which branching strategy a team wants to standardize over.
I happen to be working in a team which after months of doing non-ff merges of PRs it's starting to favor squash merges, and there is absolutely no discussion over the topic. Everything boiled down to "the history looks noisy, let's squash to remove noise as GitHub still tracks feature branches", followed by "sure, why not? If this doesn't work out we can fall back to non-ff". Done.
One of the problems that GitHub and GitLab are going to face in the coming years, as Git gets supplanted by whatever wins, is that "Git" is in the company name. Those names are going to sound like they provide yesterday's tech, in a hurry.
I don't see a venture-driven way to be the thing that replaces Git. And I don't see a way to replace Git without it being developed in the open. So, no GraceHub.
Any plans to assimilate the build system as well? Grace seems to handle large files and can deploy stuff to developers and servers. Why have a separate system for deploying build artifacts to developers and servers then? By integrating with the build system a VCS knows which files are build artifacts and which are sources.
> Any plans to assimilate the build system as well?
No plans, not at all.
One of the design questions I've had in mind the entire time I've worked on Grace is: "What belongs to Git, and what belongs to GitHub?" (or GitLab or Azure DevOps or etc.).
I'm interested in completely replacing Git, but being very selective about pulling anything into the version control level that really belongs at the hoster level.
The only big thing I blurred the lines for is including the Owner and Organization entities, to make multitenancy easier to support. My implementations of Owner and Organization are super-thin, really just hooks so the hosters can connect them to their existing identity systems.
The big hosters already have massive CI/CD and build platforms. The Grace Server API - and Grace Server is just a modern, 2024-style ASP.NET Core Web API, with no special protocols - will give us the ability to create, for instance, GitHub Actions that take the place of the Git fetch action that we all use today in our pipelines.
I'm happy to let the product and engineering teams at the big hosters figure out how to integrate with Grace.
Obviously, the intention is to get there. It's still an alpha, and it's not ready to be trusted for real yet. (And that's OK.) There are a lot of features yet to be written.
With that said, it does do the basics well: save/checkpoint/commit/promote/tag, diff, status, rebase, list refs, ls for local version, ls for server version(s), etc. And it's fast. Still much more to do.
Funny story: at the beginning I was using both Git and Grace at the same time (a .git directory next to a .grace directory to drive them) on the source code. Then I worked on auto-rebase, and had a bug that deleted some of my source files. I was able to revert from Git, of course, but after that I decided to do my testing in other directories.
Many people before me have already pointed out many of the pain points of this project, but I'd like to ask you a few more things.
I'd like to start by congratulating you though; this is no small feat!
As I understand it, this project requires an online connection to a hosted service. Said service is complex/heavy enough that it requires a k8s cluster or similar to run on, with databases, object storage, queues, etc.
Many already pointed out the unnecessary complexity, but the first thing i think of is:
> I'm never going to use this for my projects.
As in: my laptop is full of dozens, maybe hundreds, of started/ongoing/stalled/failed/abandoned projects. Only a fraction of those ever leave my machine.
I can start a project with a local git repo as easily as one folder and one command, and have peace of mind knowing that I can still record any change I make and archive the important stuff if it ever takes off.
What about checking out other people's work? To contribute, now I either need my own compute, or access to someone else's compute.
This all screams expensive, and we haven't even mentioned AI.
As a few already said, uploading every keystroke seems madness to me. I may not be as good a developer as others, but I constantly do things that I would not want uploaded anywhere, and sometimes use dirty hacks like hardcoding secrets for testing.
Let's not kid ourselves, we've all been in a position where the code was not well structured and we had to put in a magic string to make things work. Now the peace of mind that comes from the `watch` command has made you upload everything, and maybe someone else's repo has already been auto-rebased onto it.
I get that you can simply not use the auto `watch` command, but it seems to me the whole project is centered around it.
Git is portable; I can share a folder/tarball and be done with it. This seems like Google Docs for coding; could I change provider if I wanted to? Backups now seem a fairly complicated ordeal.
As a side note, I've worked for one large national telco, I couldn't believe the amount of times servers broke or the VPN/Firewall/Wifi/network fairies had a bad day.
Having the entire VCS be online only leaves me with an unshakeable scary feeling.
The only environment in which I see this being a possibly reasonable choice is an enterprise one. Unless someone makes a bet and starts offering a hosted version, this looks to me like it's not for a single developer, too expensive/complicated for a small group, yet too new for a large organization.
And without the drive that comes from individual developers who know how to use and trust it, adoption comes down to a bet made by some manager.
If I had to ask one question: what is the future that you foresee for Grace? How would you spread its adoption without individual developers using it?
> I'd like to start by congratulating you though, this is no small feat!
First of all, thank you. <3 It's been a journey, and it's only going faster. I'm more proud of Grace than anything I've ever written. And thanks for the long comment.
> Many people before me have already pointed out many of the pain points
Many people have reacted based on years of Git brainwashing, yes. :-) The people commenting here are usually the ones who deeply understand Git and wonder why other people don't and "what's the problem?" My experience in the last couple of years is that the reactions from that crowd have been mostly negative because they don't feel the pain that the other 80% or so of devs feel.
It's not unlike any other new technology. For example, SQL Server expert: "Why do we need a document database? SQL Server does what we need! This is a waste of time, just use SQL Server correctly!" Service Fabric expert: "Why do we need Kubernetes? Service Fabric does what we need! This is a waste of time, just use Service Fabric correctly!" C++ expert: "Why do we need Rust? C++ does what we need! This is a waste of time, just use C++ correctly!"
It's like that.
Git has terrible UX, but it's the pain we know, and the workarounds that we're used to that we don't realize anymore are workarounds. Git is not the final word in version control, and we deserve better. There are other, much better ways to do things. Really.
> As a few already said, uploading every keystroke seems madness to me.
I never said "upload after every keystroke". Grace has no idea that you've typed anything - it's all user-mode, my days of writing Windows kernel-mode hooks are long since past. lol And I've never written a keystroke logger! eww...
It does use a file system watcher, so it knows when files in the directory you're tracking (i.e. the one with a .grace directory) have changed.
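For the curious, here's a minimal sketch of the primitive that kind of watcher builds on. This is just the standard .NET `FileSystemWatcher`, not Grace's actual implementation; it leaves out debouncing, the .graceignore check, and the upload:

```fsharp
open System.IO

// A minimal sketch, not Grace's actual code: the standard .NET FileSystemWatcher
// raises events when files under the tracked root change. A real watcher would
// also debounce rapid events, apply .graceignore, and upload the new version.
let watchRepo (repoRoot: string) =
    let watcher = new FileSystemWatcher(repoRoot, IncludeSubdirectories = true)
    watcher.Changed.Add(fun e -> printfn "changed: %s" e.FullPath)
    watcher.Created.Add(fun e -> printfn "created: %s" e.FullPath)
    watcher.Deleted.Add(fun e -> printfn "deleted: %s" e.FullPath)
    watcher.Renamed.Add(fun e -> printfn "renamed: %s -> %s" e.OldFullPath e.FullPath)
    watcher.EnableRaisingEvents <- true
    watcher
```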
Right now, there's no explicit `grace add` like there's a `git add`. There's just a .graceignore file. I find Git's need for an explicit add gesture to be bad UX; again, it's UX that most have gotten used to, but that doesn't mean it's right. If enough people really want add to be explicit, we'll make it work but probably not make it the default behavior.
The only time auto-rebase happens is when the parent branch of your branch gets updated. We can all expect that, like GitHub has today with pre-receive hooks and Secret Scanning, your files will be scanned for secrets and handled appropriately. The difference for Grace is: deleting a version that you don't want is an expected, normal function, unlike rewriting Git history and hoping that everyone else sharing the repo does their fetches and rebases and whatever appropriately to remove the unwanted version.
> could I change provider if I wanted to?
I haven't written that level of import/export yet, but it'll have to exist at some point. Changing hosters is a rare, once-in-a-few-years-if-ever event for most organizations and most individuals, because it's not just about the code, it's about the CI/CD and packages and issues and PR's and project tracking.
> Backups now seem a fairly complicated ordeal.
Yes and no. I'll offer a backup in `git bundle` format, so that's simple. I have no intention of writing a live Git-to-Grace sync, the branching model is different enough that the corner cases would be hard to deal with.
On the server side, yes, like every other cloud-native system that uses more than one data service, backups will need to be coordinated. I've written a short paper on that [1].
> The only environment in which I see this being a possibly reasonable choice is an enterprise one.
Hard disagree. Enterprise is definitely a main target for Grace, but there's no replacing Git without making life better for open-source devs as well, and Grace is meant for them/us. Personal branches on open-source projects, not forks, plus auto-rebase that keeps your personal branch up-to-date with `main`, instead of walking up to a fork after weeks or months, seeing `234 commits behind`, and declaring bankruptcy... until you actually use it, it might be hard to see how nice auto-rebase is, but, really, it changes how you feel about how clunky and manual and disconnected Git is.
Giving developers a different, much better UX is enough reason for open-source to adopt it, but when you see how fluid and connected it becomes to work together in open-source with Grace I expect it'll catch on.
> what is the future that you foresee for Grace? How would you spread its adoption without individual developers using it?
Short version:
Git is reaching its EOL, for a few reasons. Grace is intended to be ready to meet the actual needs of developers in the late 2020's, not the mid-2000's like Git. Individual developers will use it. And the UX is so much better once you try it that it won't be a hard sell for most.
Longer version:
Git, as used today by most everyone, is a centralized version control system that we access through a confusing distributed version control UX. Unless you push to production from your dev box, you're only shipping code by running `git push` and seeing that code run through some centralized CI/CD pipelines. This is one indication that the use case for Git, and the design of Git, have diverged enough that it looks like it's time for something new to come along.
We all have to come to terms with the fact that we've found Git's fundamental design limits. As monorepos have come into fashion - and if we do nothing else in this industry, we follow fashion trends - we're seeing more and more that the only way to do large monorepos well is to use `git scalar` and partial clones so we don't clog our machines and Git servers with unnecessary traffic.
Once you're using `git scalar`, you're explicitly using Git as a centralized version control system, to run a repo that's centralized on a hoster, and the size of those repos forces GitHub and GitLab etc. to constantly invest in how to scale up the server side to match customer demands. Don't forget, the hosters all run Git protocol, but how they store data behind that protocol is the secret sauce of taking a mid-2000's client/server thing like Git and making it web-scale, and the demands on that scaling are only going up.
So, we've broken the client-side contract of Git - Git has the full repo on every machine! - with partial clones, and at some point the only way to scale up the server side is to not use Git repos, and break them up into object storage (this is what Azure DevOps does). So... it's no longer Git on the client, and it's no longer Git on the server. Why are we clinging to this thing?
This is what it looks like when a technology has reached its EOL, and it's time to find something new.
Individual developers will 100% be using it... we'll all have the same free accounts on GitHub or GitLab or whatever hoster we use, and when we start up projects, they'll just be in Grace repos at those hosters. The vast majority of developers don't care how their version control works, they just want it to work. Grace is so much easier to understand than Git, and few people care about how much is local and how much is cloud, as long as it works and it's fast.
No one will force you to use Grace for your individual projects, but at some point, after using Grace, I don't think you'll want to go back. If you want to keep using Git yourself, it's not going anywhere.
I find it amusing how attached to "local repos" some devs have become when everything else we do lives in the cloud, or is synced to the cloud, and it's not a big deal. Source control isn't a different category of thing that must be local. It's just a no-longer-relevant habit from Git.
There's a lot of salesy talk on the Github page and like zero examples of real world use and/or how it compares functionally to git.
>Branch-level control of reference types
Please. Do not encourage this. This is just begging to become a mess, especially in B2B .NET/Java shops that like to over complicate things.
>Grace Server is designed to run on fast, cloud-based PaaS services for incredible scale and performance. Grace uses virtual actors as networked, in-memory data caches to maximize performance.
What? Why?
>Grace uses SignalR to create a live, two-way communication channel between client and server.
Oh god NO. SignalR is a pain in the ass to manage.
>When your parent branch gets updated, within seconds, grace watch will auto-rebase your branch on those changes, so you're always coding against the latest version that you'll have to promote to. When there is a problem, auto-rebase lets you find out right away. You can fix it while you're in flow, and skip the conflict later.
This sounds nice. How does it handle merge conflicts? Where is an example here? Does it store the results of that merge resolution in case another update was pushed to the remote during that time?
I'll use a simple example, with one `main` branch and however many child branches, like most repos.
In Git, we merge to `main`. In effect, we take code from some other branch, and we shove it towards `main`, using a merge operation which sometimes results in code being created in `main` that's never existed in any other branch.
This seems... odd. Even if we're used to it, it's just weird. Why would we want something in `main` that's never existed anywhere else, that's never been on a developer's machine before, that's never been tested locally? So we build CI/CD pipelines and automated testing to check code that's never been seen anywhere else, and hope it's good.
In Grace, changes move from `main` to the child branches through (auto-)rebase. There are no merges. The "merge" takes place during rebase, when changes from `main` are applied to the child branches.
A promotion in Grace takes an already-existing version that's been committed to a branch, and simply creates a new reference to that root directory version in `main` that doesn't change any code at all. It's a simple database insert, actually.
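To make that concrete, here's a rough sketch of the shape of that insert. The names and types are illustrative only, not Grace's actual schema:

```fsharp
open System

// Illustrative only - not Grace's actual types. A promotion is just a new
// reference on main that points at a root directory version which already
// exists (it was created and tested on the child branch). No file contents
// are created or rewritten.
type ReferenceType = Save | Checkpoint | Commit | Promotion | Tag

type Reference =
    { ReferenceId: Guid
      BranchId: Guid                // the branch this reference lives on, e.g. main
      RootDirectoryVersionId: Guid  // an already-uploaded version of the repo's root directory
      ReferenceType: ReferenceType
      Message: string
      CreatedAt: DateTimeOffset }

let promote (mainBranchId: Guid) (rootDirectoryVersionId: Guid) (message: string) =
    { ReferenceId = Guid.NewGuid()
      BranchId = mainBranchId
      RootDirectoryVersionId = rootDirectoryVersionId
      ReferenceType = Promotion
      Message = message
      CreatedAt = DateTimeOffset.UtcNow }
```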
> Branch-level control of reference types
Oh, hell yes. This is a great feature, actually. When `main` only allows Promotions and Tags, you're guaranteed that nothing gets to it without a promotion, which we can expect we'll have proper authorization controls for, and proper event-driven pre-checks with CI/CD, etc.
No accidental "oops I pushed a secret and now everyone has to deal with rewriting history". Even if you push a secret to your branch, you can delete that save or checkpoint or even commit immediately. No harm, no foul. And the hosters will have something like GitHub's Secret Scanning to make sure that stuff doesn't end up in `main`.
> SignalR
Fortunately, you won't have to manage it. ;-) The hosters, like GitHub or GitLab, will manage it for you. Microsoft already has first-party services with hundreds of thousands of concurrent users on SignalR, I'm not worried about this at all.
Maybe a number of my inquiries or concerns would be answered by me watching the video on the GitHub page and/or playing around the tool, but I'll yap here for now.
FWIW I appreciate a project of this scale using F# (not that the choice of language matters, it's just nice to see).
>The "merge" takes place during rebase, when changes from `main` are applied to the child branches.
This is just a difference in operation, but the problem you were alluding to in your previous sentence still exists in poorly trained teams.
Git has rebases. Grace has rebases. Neither solves the issue of "that's never existed anywhere else, that's never been on a developer's machine before, that's never been tested locally."
>A promotion in Grace takes an already-existing version that's been committed to a branch, and simply creates a new reference to that root directory version in `main` that doesn't change any code at all.
This sounds like the equivalent of simply updating what commit a branch label is looking at in git (i.e., "fast-forward" is one of many ways this can happen as well as explicitly just changing what commit a branch is pointing to)?
>In Grace, changes move from `main` to the child branches through (auto-)rebase.
How do I revert or demote/"unpromote" a set of "grouped" changes that I no longer want on a branch? If it's a series of auto-rebases, do I have to define the set of changes I want to "revert" or demote?
If I have a rather complicated shared history between branches, can I "easily" query all the places a commit exists at and when it was promoted (or other actions related to it)?
>you're guaranteed that nothing gets to it without a promotion
The issue isn't gatekeeping things to a branch or repository -- we've solved that already. The issue is allowing and even encouraging an influx of complicated processes and subtle permission cascades that will likely mimic the company structure. Which is never a good thing since they are often not purposeful or well thought-out.
The Linux kernel project, for example, handles this by having authoritative repositories, which is as wonderfully simple as it is strikingly effective (and sane). I've always used this model as well with orgs I've led.
I have many other questions, but here's my rather ignorant or inexperienced take on some of the choices here. The good here is that "forcing" people to use rebase is absolutely the right thing to do IMO, it's the correct operation for 98% of VCS operations. The bad is that some of the design choices seem to echo what was done with SVN, CVS, Sourcesafe, TFS, etc.
Right or wrong, the similarity, or seeming similarity, to centralized VCSs of the past, and the problems they implicitly have, makes me uneasy.
>Microsoft already has first-party services with hundreds of thousands of concurrent users on SignalR, I'm not worried about this at all.
I think we'll just agree to disagree here. I've used SignalR on and off since ~2014 (if memory serves correctly) and I've never had an amazing experience with it.
Looks really interesting, but I feel the README spends way too much time before getting to the point of how this differs from git. I would lead with the "Grace is an event sourced version control system ..." section.
It looks to me to be a continuous replication backup solution masquerading as a DVCS. Combining them is problematic in the real world because of local noise others don't need to see and of inadvertently committing secrets and temporary large files. The storage and networking demands for this would be enormous.
Devs already do this all the time. That's why GitHub has Secret Scanning. The need for that kind of service doesn't change if the VCS changes.
> temporary large files
Saves in Grace are ephemeral, so those files will be deleted when the save references are deleted. There is a repo-level setting for how long that is, current default is seven days, but we'll see what makes sense.
> The storage and networking demands for this would be enormous.
Fortunately, Azure Blob Storage and AWS S3 and Google Cloud Storage are effectively infinite compared to any requirements from a version control system.
I've only written the Azure Blob Storage implementation so far, but the idea is that Grace offloads all file upload/download traffic to those object storage services, using Azure SAS tokens [1] or AWS Presigned URL [2], etc.
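As a sketch of that offload idea (not Grace Server's actual code), handing a client a short-lived, read-only download URI with the Azure SDK looks roughly like this:

```fsharp
open System
open Azure.Storage.Blobs
open Azure.Storage.Sas

// A sketch of the offload pattern, not Grace Server's actual code: the server
// hands the client a short-lived, read-only SAS URI, and the client downloads
// the file directly from Azure Blob Storage instead of streaming the bytes
// through the server.
let getDownloadUri (connectionString: string) (containerName: string) (blobName: string) =
    let container = BlobContainerClient(connectionString, containerName)
    let blob = container.GetBlobClient(blobName)
    // Requires a shared-key credential; the URI expires after 15 minutes.
    blob.GenerateSasUri(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddMinutes(15.0))
```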
Nope. Last thing I want is for my repo to become huge just because I happened to temporarily create a big file inside the repository directory. Or passwords and credentials to leak because I needed to do a test run.
I can see it now: A new kind of malware that repo bombs your organization by randomly creating 2TB files in some guy's repos. Every single person becomes the potential point of network collapse.
> every save and commit, every branch name change, every everything, is stored as a separate event.
Why would you DO that??? Every branch name change??? That's OCD level crazy.
> Imagine: there's a promotion to main, your branch gets auto-rebased on those latest changes, and then your local unit test suite gets run automatically so you immediately know if there's a problem.
Oh my god NO!!! The last thing I want is a ghost in the machine changing stuff while I'm working on my branch! Imagine the function you're working on changing mid-edit due to a rebase! What hell is being unleashed here?
> Grace lets you share your code with team members effortlessly, around the world, for those times when you need another set of eyes on it, or just want to show them something cool.
You can already do that! It's called a branch.
> C'mon, it's 2024. There's got to be some AI in here, right?
OMG Seriously???
> No stashing
Okay, now you're just taking the piss...
By this point I'm checking to see if it's April 1st.
Imagine you did have an idea, put a lot of effort building it, just so that some random person takes a dump on your work. It wouldn't be nice, would it?
Unnecessarily harsh and misses the point that this is a new VCS that brings valid new ideas to table. As with any new thing, if it's not for you, it's not for you.
SBArbeit, ignore this kind of comment. Not because it's not valid feedback, but because it isn't worth it.
There's no way to build something with an intention as big as "replace Git" that won't invite knee-jerk reactions.
I know I'm building the thing that aligns with my creative and technical vision. That's all I can do. It will succeed or it won't, and the reactions from people who are already super-comfortable with the existing technology matter less than the reactions from people who only understand the basics of Git and are afraid of it. I'm building it for them (which includes me).
You can use Grace without running `grace watch`, but really it's a much better experience if you do. It makes version control more ambient, more something that just happens in the background. And you can delete any save or checkpoint that you don't want, it'll disappear.
I mean, I assume you don't type curse words every time you edit a file. I don't... it's only like every 10 edits or so for me. ;-)
Yes, I presume that almost every developer in the world, by the late 2020's, will have a working Internet connection ~100% of the time. If you still need some sort of offline VCS, Git will still be around. Treating source control as something that must work without an internet connection is just silly to me... we almost all use Git as a centralized version control system already. You can't push without an Internet connection, and most dev shops won't let you push to production without going through the (Git)hub first.
It's not like Grace will prevent you from making changes; as soon as you have a connection again, `grace watch` will catch you up. Or don't use that and just do `grace rebase` or whatever. Up to you.
> C'mon, it's 2024. There's got to be some AI in here, right?
Wrong.
This is a perfect way of convincing me _not_ to use the tool. I'd take dumb, dependable, predictable, easy-to-explain tools over "smart", "AI-enabled" ones any day.
From the vision-list I understand that I'm not the target audience since I'm not afraid of my version control system, but I kind of think that maybe I'm also not unique in that I do a lot of swearing and insecure stuff and nasty tricks locally in the repo when I'm developing and it sort of seems like a really, really bad idea to stream it to a remote server...?
Interesting but this page reads like a marketing piece.
The discourse around VCSs is mature and you should BLUF with where yours sits in the spectrum of alternatives.
What is the repository model and the concurrency model? Those are the basics.
Seems to be centralized & um, merge or lock, maybe? I don't want to have to parse lots of fluff to find that out.
Then, what is the storage method and change scope? How do revision IDs work? No idea and I am not going to dig further.
Final comment is that "100,000 files with 15,000 directories" isn't really that big. How many TB did that represent? How many revisions/branches/history?
> Interesting but this page reads like a marketing piece.
I would go a step beyond and say that the page reads as if it was purposely designed to confuse the reader in order to hide the project's true features and capabilities, and instead create the baseless idea that it solves problems that it can't even state clearly.
It reads like it's the zombo.com of VCSs.
It's even more perplexing that it tries to portray itself as the next best thing, when it fails to make a case on whether it even works at all.
1: Lead with the most critical differentiators versus git. Source control systems are a dime-a-dozen. The "I can do this better" projects that fail are a dime-a-dozen. Get us hooked within the first 3 paragraphs.
2: I personally don't (usually) encounter performance issues with git. I get more performance issues from Visual Studio than from git. The only performance issues are the random garbage collections, and those are infrequent.
> When your parent branch gets updated, within seconds, grace watch will auto-rebase your branch on those changes
3: Automatically changing my code while I am working on it is a horrible idea. DO NOT DO THIS. Often, programming requires low distractions, and the distraction of changing the very thing that I am working on while I am working on it is a huge distraction.
The dangers:
- If I am trying to isolate a specific behavior, auto-rebase could change the behavior of what I'm trying to isolate while I'm trying to isolate it.
- If the upstream rebase creates a build error, it will completely destroy my flow. (It makes me stop a task while it's incomplete.) This could happen if the parent branch has a renamed method, and my changes introduce a new call to that method.
Instead, I suggest a very gentle notification about parent branch updates. Don't steal focus.
Edit: I was lead developer on a desktop file sync client for 9 years. Whenever I had a support escalation from someone using us as source code control, I'd always recommend that the customer use git and tell them that we used Github. Source code repositories have A LOT of transient files that come and go very quickly, and are very hard on live file sync.
Aren't people using something like git-wip to commit changes to WIP branches every time they save? I've been doing it for years. The number of times I've gone to get something from those WIP branches: 0. Seriously. I keep it on "just in case", never needed it. Then again I do have a full tree-based undo system in my text editor.
I'm a long-time C# dev who got into F# about five years ago. F# is so awesome, I hope that if Grace catches on, that more people will pay some attention to it.
A big design driver for Grace is eliminating fear around using version control.
Merge conflicts are one of the big places where fear can happen, even if it's subtle; you think you're all done with your work and then... ugh, what happened? are my changes stepping on the other changes? are they stepping on mine? do I have to retest? etc.
Having an event-driven, automatic AI review of a conflict is just something that makes sense today. I don't just want to tell you there's a conflict, I want to tell you there's a conflict and give you a solution to it that you can accept with one click if it looks good. Grace will even be able to run CI/CD pipelines against that proposed resolution before it tells you about it to help you make the decision.
I'm not sure how that makes sense today, to me it's more of a novel idea than a solution to a problem. I use AI every day in my workflow and I honestly don't believe it's capable of reliably solving merge conflicts. GPT-4 can write a really nice method but can't integrate it into my code-base. If half of the proposed solutions are incorrect, I need to scrutinize all proposed solutions equally. Would I rather review likely inaccurate conflict resolutions, only to solve it myself afterwards, or resolve it myself from the get-go in a way that I know makes sense? Do I want to give my teammates the ability to click a button and introduce subtle merge bugs?
Don't get hung up on what GPT-4 can do. That's irrelevant. Even Sam Altman calls GPT-4 "mildly embarrassing".
"Don't skate to where the puck is, skate to where the puck is going." In 2028, for instance, will GPT-7 (or whatever) be able to handle solving a merge conflict? I expect it will.
It's totally cool. It's not a product yet, it's a one-person project so far. I'm not asking anyone for any money. I expect one day that it will be a product.
It's still an alpha (see the highlighted note towards the top of the readme). There are features I intend to ship in 1.0 that aren't even started.
And even if all goes as well as possible, it won't ship 1.0 until 2026 at the earliest, which means early adopters, and mass adoption not until 2027 or 2028.
If you're going after something as big as Git, it takes time. It would be irresponsible to not think about where computing will be in a few years vs. today as I work on it.
I guess it's a marketing win if we're talking about AI (which is ML!) in the context of an operation as obvious and deterministic as a merge. (Go see rerere: https://git-scm.com/docs/git-rerere)
Looks like instead of VCS fear, developers would get code-change fear, because any mid-to-complex refactoring attempt would immediately break somebody's code after promotion.
I am sorry, maybe I am a boomer but I don't like it one bit.
> Grace is a new, modern, cloud-native version control system.
The whole point of git was to replace crappy centralized solutions
> Grace Server scales up by running on Kubernetes and massive PaaS services from large cloud providers.
word salad
> Every save is uploaded, automatically
Can't wait for the passwords to start leaking.
> Grace simplifies this by breaking these usages out into their own gestures and events:
> grace checkpoint - this means "I'm partially done", for you to keep track of your own progress
> grace commit - this is "I'm really done" or "This version is a candidate for promotion"; you'd use a commit for a PR
> grace promote - in Grace, promotions replace merges; a promotion is how Grace moves code from a child branch to a parent branch
I have no problem with this, I even like it.
Some around here will remember that before "MRs" (back then called "PRs") were ubiquitous, Git could provide different workflows. We can write tools on top of Git, like GitHub, to do more fancy things. I think it's high time for some new concepts backed by Git.
> When your parent branch gets updated, within seconds, grace watch will auto-rebase your branch on those changes, so you're always coding against the latest version that you'll have to promote to.
I don't think I ever wanted to auto-rebase every time I save. In fact, I think I really don't like this... we might as well just all start co-editing on the same server.
> Personal branches, not forks
> With Grace, there's no need for forking entire repositories just to make contributions. In open-source repos, you'll just create a personal branch against the repo.
This is what everyone does normally, except Github made forks a thing or whatever so you could clone someone else's repo and hack on it without having mainline permissions.
> Personal branches, not forks
> With Grace, there's no need for forking entire repositories just to make contributions. In open-source repos, you'll just create a personal branch against the repo.
> This is what everyone does normally, except Github made forks a thing or whatever so you could clone someone else's repo and hack on it without having mainline permissions.
Yeah. I'm an open-source maintainer and the "everyone creates a branch on upstream" model is a complete non-starter for me, even if only due to the mess it would cause. This is a massive step backward from "fork and pull."
Sounds like a marketing service had the idea to "simplify git", the presentation sounds like they haven't heard of git flows, and no offline mode is a total no-go for me.
> Think about this: if your Internet connection went down, could you continue to do your job
Reads like "don't you guys have phones". My bet is it will be forgotten in a year.
First, Grace is still an alpha, and isn't ready to be trusted for real yet. Too much work left to do, and so far it's just me.
Second, as I open up to contributors, we need a place to work together. GitHub is the Home of Open Source. It's not just about the version control system; it's about the social network and features built around it, and right now GitHub has all of the other features, and I don't have time to build Repos, Issues, Discussions, PR's, Actions, and the rest of what GitHub has in addition to a VCS.
At some point, I will run Grace's version control in Grace, and create a way to use GitHub Actions to automate things from there, while keeping Issues, PR's, Actions, etc. somehow in GitHub. It's a ways off, for sure.
I like the idea of it being easy to use. I can get by with git and mercurial, but I find that occasionally I fuck things up and don't really know how to fix it other than manually copying some files around and blowing away a branch. A lot of the error messages just feel obtuse.
> Branching strategy is the thing about Grace that's most different from other version control systems. Because it's so different, it's worth going over it in some detail.
> In single-step branching, each child branch can be promoted only to its parent branch, and must be based on the most-recent version in the parent before being allowed to be promoted.
Um. Looks like it's a forced rebase with ff-merge. It's quite a stretch to call it "so different". Basically it locks you into a single server and prevents any decentralization. It's interesting how you solve consistency issues in a cloud environment; this very invariant should not be violated.
Random thought, but I used to use version control (Git) for work. After I quit, I kept programming for fun...and to be honest I like programming without version control a lot better. I realized I hate version control. It imposes too much structure on programming, like having to keep all my drafts of the past. It's rather stifling in fact.
Now, I know version control is super useful for large projects, especially of the commercial variety. But now I see all the modern software out there and I wonder if maybe the increasing commercialization of the software world has in part been accelerated by version control, which makes programming more robotic and less artistic.
Interesting perspective. I feel the opposite; in my personal projects, I love having Git because I can fearlessly rework things knowing I can just abandon the rework if it's not playing out, or stick the rework in a branch and come back to it later. Different strokes!
True enough! I like the lack of version control because it feels like surviving in the wilderness with less safety gear and more self-reliance, or swimming in a lake without a life guard.
I can’t understand what you mean at all. How does it make programming more robotic? I feel more empowered to be creative when I know I can get things back to a valid state if I break them too badly.
For personal projects I often just use an alias that commits in my project directory with no commit message. There’s no burden to it. Sometimes I might choose to do a detailed merge commit to add a description for a batch of changes once I’ve settled on something I like, but usually I don’t bother if I’m not going to be working with others.
I feel a lack of version control system is like adventure with less safety gear and more self-reliance. It's a bit more interesting to know that I can break things and thus, I am more careful and use my own senses. It's like a video game without instant save.
When I write code for fun, I use Git. But I only have the master branch, most of my commit messages are some variant of “asdfghjkl”, and instead of isolating logically separate changes into their own commits, I just commit the state of my working tree every so often. In other words, I use Git mainly as a glorified undo.
After spending my work hours justifying and polishing every single change, it’s nice to have a more free-form process for my own stuff. In fact, while my process is partly borne out of laziness, I think it can sometimes lead to better code. If I see something that’s just a little suboptimal, I can fix it on the spot, without having to go through a bunch of process or explain why I think the new version is better.
Still, I can’t imagine living without that glorified undo. If anything, it makes the process more free-form rather than less. I can make drastic rewrites if I feel like it, safe in the knowledge that if something breaks, or if I decide halfway through that the rewrite was a bad idea, I can just `git stash` and get back to the old state (while still having the stashed changes saved if I change my mind again).
I'm the opposite. Having many years of experience writing software personally and professionally before version control systems became ubiquitous, I have way too many horror stories to tell...
* damage done by a single careless command
* coding around in circles because you forgot that you changed this one little thing in a file somewhere and now you've made a mountain of irreversible (without VCS) changes to your codebase
And so many other nasty things that coding without a seatbelt brings.
One of the design intentions for Grace is to make version control more ambient, more something that just happens in the background, so you get the benefits of file-level undo and diffs and change tracking without having to do anything, until you're ready to run `grace commit` and `grace promote`.
I'm like you: for small personal projects I either don't bother with source control, or I update it rarely. I'm the first beneficiary of the ambient approach that Grace has; it's really nice to just have all of my previous versions automatically uploaded and tracked, and then deleted a few days later if they're not pointed to by some other reference like a commit or promotion.
I'm definitely the opposite, however I definitely use version control differently and less rigidly in personal projects. I commit more, care less about making the "perfect" commit, and making one change per commit. It's a lot more freeform in my personal projects precisely so it doesn't get in the way and take the fun out of the project.
> I realized I hate version control. It imposes too much structure on programming, like having to keep all my drafts of the past. It's rather stifling in fact.
That's like saying that using an editor with an undo button is too stifling.
Don't confuse the act of putting paint on a canvas with the finished result. Programming is not art, it's a means to an end.
> It imposes too much structure on programming
The structure that tools like git provide is based on decades of know-how from people who use programming and source code control as a tool to accomplish a result.
You may decide that painting with oil or acrylic paints on a canvas is less artistic and more structured than using crayons or markers on paper. But there's a reason why artists gravitate towards certain tools to make certain results. The time invested in using the tools correctly is required to make the desired result.
In my case, I don't use git for one-off programs and experiments. Once I plan on working on a project for an extended period of time, it's extremely helpful.
Programming absolutely has an artistic side. I’ve read beautiful, elegant code that expressed powerful ideas in ways that made them seem obviously correct. You don’t get that from a pure engineering approach. Mathematicians know what I mean here.
> I’ve read beautiful, elegant code that expressed powerful ideas in ways that made them seem obviously correct.
Oh no, don't confuse the brush strokes with art.
A computer program isn't an idea: It's a set of instructions to the computer to accomplish that idea. The "elegant code" is merely the brush strokes, the idea lives in your head and in the resulting program.
As someone who writes that kind of source code: It's not art, it's years of discipline and careful attention to detail.
A different way to say it: A song, and its performance, are art. The sheet music, and the filing system, are not. Yet, both are important and critical tools for preserving and communicating the song. The "legibility" of sheet music comes through discipline, but isn't the art itself.
Git is the filing system, and source code is the sheet music.
>I realized I hate version control. It imposes too much structure on programming,
Having used several different VCS, I have no idea what you're talking about. Are you conflating some complex CI/CD setup with branch naming schemes and oodles of hooks with version control in general? I mean, I think git is slightly overly opinionated when it comes to the relationship between commits and branches, but I don't get the impression that is what you mean.
The centralization and also the monitoring are hard nopes for me.
If the monitoring was all local and not pushed up to k8s instantly, maybe. But this screams of a PHB's wet dream.
As someone who has maintained forks of large FOSS code bases, this also makes doing that harder. I wasn't interested in sharing my changes, but I wanted the SCM to allow me to merge upstream's changes.
This feels like a step back to systems like Clearcase, and I.. Just don't want it.
Not being able to work offline at all seems like somewhat of an issue.
99% of the time it's fine, but if you have even one thing you want to do locally, now you have to use two separate tools.
I'm not sure how many people have an absolute hard requirement for local operation, but it is a nice extra layer of peace of mind.
Git might not be perfect, but it's one of the only tools with such a large market share. All your projects can use the same VCS, which is really nice.
On the other hand, some programmers seem to really like having different highly specialized tools for every job, rather than adapting jobs to fit the tools, so I'm sure there's demand for a new VCS there.
> Working from a hotel coffee bar and the WiFi goes out? Bummer if that blocks your work.
Grace doesn't change anything about that for you. You still write code locally, and `grace watch` will sync you up when you reconnect.
You can't `git push` without an internet connection, either, and you can't work on your Azure / GCP / AWS resources. Or your Google Docs. Or your email. Or your Teams chat. Source control isn't a special case that must work offline, it's just that we're used to Git, and we pay the bad-UX tax of using a distributed VCS as a centralized VCS, because very few of us push to production from our dev boxes, only from a (Git)hub.
Assuming a working internet connection in the late 2020's (i.e. roughly when Git will be replaced by some competitor) is not a blocker for almost every developer in the world. And Git will still be there if offline usage is a hard requirement.
Hard disagree. It’s inherently true that I can’t work on remote servers and all those things but VCS are not inherently remote. I have entire local hobby projects I’ve never git pushed anywhere because my local copy is authoritative. We as an industry largely moved off centralized VCS because of those advantages of having 100% functionality on an airplane.
> I have entire local hobby projects I’ve never git pushed anywhere because my local copy is authoritative.
Isn't this backward? Your local copy is authoritative because you've never pushed anywhere (-:
I know it's probably overkill and you should just use git (or just use folder backups or something if it really is just local) but you could in theory run the grace server locally too, I imagine.
> Not being able to work offline at all seems like somewhat of an issue.
I agree. If it's intended to be a web-based system, why not have it running as a webserver on your local machine?
Also, the intention is to have a GUI, or rather 4 or 5 separate GUIs:
> Grace will have a native GUI app for Windows, Mac, Android, and iOS. (And probably Linux.)
But if you're having a web UI, why have a GUI as well? It is just extra work, and it is quite likely that the UI for these tools won't be the same, potentially making them harder to learn.
The intention is that they'll be identical, because they'll be written using Avalonia.
And I hate Electron. I hate it so much. I hate the web pages that I'm being told are "apps". I hate the lack of keyboard shortcut support. I hate the bad performance. I hate how it's not stick-to-your-finger fast. I hate owning unbelievably powerful computers and mobile devices and not taking advantage of their power and capabilities, and just reducing them to surfaces for fake "apps".
I'm not a fan of it either. When I say have a web UI, I don't mean electron; I mean the program runs as a web server on some port on your local machine and you point your web browser at it.
I don't think electron should exist.
> I hate the lack of keyboard shortcut support
And I hate the existence of keyboard shortcuts, or at least hard-to-discover ones. On a daily basis I accidentally press keys (I've no idea which ones) that make my windows misbehave.
I also don't think I should have to learn keyboard shortcuts -- interfaces should be discoverable and not require you to memorise stuff.
> I hate the bad performance.
Then use a text UI not a GUI. Back in the day, Borland made extensive use of these for Turbo Pascal and they worked well and looked good. As a bonus (like a web app) they can run easily on an external machine.
> I hate how it's not stick-to-your-finger fast.
I've used git for years and before it other source control systems. I don't remember a single instance where the slowness of the source control system was ever a factor.
I don't develop offline often these days, but I do recall a recent time when my power went out for a few hours. Since I was using git on my laptop, I just shrugged and kept working. Using git allowed me to continue working in a flow state that would normally have been interrupted by a dropped connection. Of course, if your project doesn't run locally, that point is moot. For others, in areas with more power/connection interruptions, the time saved would add up.
I don't develop offline, but I've developed the habit of making lots of tiny commits, then rebasing them into a single commit before pushing. To me, a commit is for something I might want to undo and a push is for something that I either want to share or am afraid of losing. I'm not suggesting this is the best way to use version control or even a good way -- just that it's a habit I've developed and don't really want to unlearn.
Honestly, I cannot tell at first glance if it's a joke. "Welcome to Grace" feels like it obviously must be a joke. But then it's kinda too elaborate for a joke (and not a very original one, either). So…?