Subversion 1.8 Released But Will You Still Use Git? 378

darthcamaro writes "Remember back in the day when we all used CVS? Then we moved to SVN (Subversion), but in the last three years or so everyone and their brother seems to have moved to Git, right? Well, the truth is Subversion is still going strong and just released version 1.8. While Git is still faster for some things, Greg Stein, the former chair of the Apache Software Foundation, figures SVN is better than Git at lots of things. From the article: '"With Subversion, you can have a 1TB repository and check out just a small portion of it. The developers don't need full copies," Stein explained. "Git shops typically have many smaller repositories, while SVN shops typically have a single repository, which eases administration, backup, etc."'" Major new features of 1.8 include switching to a new metadata storage engine by default instead of using Berkeley DB, first-class renames (instead of the CVS-era holdover of deleting and recreating with a new name), which will make merges involving renamed files saner, and a slightly simplified branch merging interface.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • yup (Score:3, Funny)

    by Anonymous Coward on Wednesday June 19, 2013 @09:26AM (#44049459)

    Short and simple: yes. If Linus says so, we obey.

  • git --bare init

  • I really started using git regularly at the same time that I started using GitHub.

    It's grown to so much more than the underlying version control software.

  • by WuphonsReach ( 684551 ) on Wednesday June 19, 2013 @09:27AM (#44049473)
    Whether you use git/hg/etc. (distributed VCS) or a centralized VCS (SVN, etc.) has a lot to do with the level of control you desire/need and how much centralization you desire/need.

    For some development projects/communities, where everyone is independent and rarely connected to a central point, a distributed VCS makes more sense. The downside is that you have to rely on developers to push their changes to some "master". On the other hand, it means they can work offline/disconnected.

    For less technical users, a centralized VCS like SVN makes more sense. As long as you can get them to commit their changes, you're assured that those changes are now on a server/machine that is getting backed up and taken care of.
    • Heretic! Real programmers use Git. Nobody else gets to sit at the cool kids' table.
    • by Anonymous Coward on Wednesday June 19, 2013 @10:47AM (#44050509)

      I find that using git-svn [kernel.org] with a central SVN repository results in a very nice workflow. In my local git repository, I'm free to develop in a nonlinear, exploratory fashion. At logical stopping points, this work gets neatly reconciled with reality in the form of sensical SVN commits. It's the best of both worlds, and allows me to do useful work even on an airplane.

  • by cold fjord ( 826450 ) on Wednesday June 19, 2013 @09:29AM (#44049489)

    Going through the article it looks like a nice set of improvements. I expect that subversion users will be pleased with both the current improvements, and what will be built upon them in the future.

    Among the useful improvements noted [developer.com]:

    One of the areas where robustness has been improved is the storage of metadata. SVN now tracks moves of working-copy items. Stein noted that the harder part is getting the rest of the system to recognize the moves, and that work is ongoing. He explained that, historically, SVN didn't "move" items per se. Instead, the item was copied to its new location and deleted from the old.

    "This is problematic (for example) because if an edit comes in from the server for that deleted item, then we don't know what to do with it," Stein said. "For a moved item, then we know the edit should probably be applied to wherever the thing was moved."

  • My only complaint of SVN and the reason I moved to git.
    • It sounds like the proper support for renaming that's new in this release is a step in the right direction. I assume branching is still really copying in Subversion though, which I recall being problematic in earlier versions where you needed to know at what point you'd branched from to do merges.
    • by RaceProUK ( 1137575 ) on Wednesday June 19, 2013 @09:40AM (#44049617)
      Not used SVN for a few years, but I've merged branches several times with it. Not sure what you're trying to say.
      • by ebno-10db ( 1459097 ) on Wednesday June 19, 2013 @10:11AM (#44050037)

        Pardon me reposting part of what I wrote above, but I think it explains what he was talking about:

        It's ok to create a branch (say from trunk), work on the branch, and even update it from trunk, but you're pretty much limited to a one-time reintegration merge from that branch back to trunk. You can't easily go back and forth, choosing to put one new thing from trunk into branch and vice versa. That becomes a serious pain if, for example, you use trunk for new development and a release branch on the side. The natural way to work is, if a bug is found in the release, fix it in the release branch and then merge that one change back into trunk. Similarly you may fix a bug in trunk and realize you should also merge it into the release. Svn doesn't really let you do that, so I have to tell people to always make the fix in trunk and merge that one change to the release.

        • by phantomfive ( 622387 ) on Wednesday June 19, 2013 @10:47AM (#44050525) Journal

          The natural way to work is, if a bug is found in the release, fix it in the release branch and then merge that one change back into trunk.

          I hate this way of working... I have some coworkers who always do it. They occasionally forget to merge something back into head, which means they have to revisit the same problem in a few months. If you're going to miss a merge, it's better to miss the merge back into the branch rather than into head, because as long as it's merged into head, it'll be fixed eventually.

          • I hate this way of working... I have some coworkers who always do it. They occasionally forget to merge something back into head, which means they have to revisit the same problem in a few months.

            I've found that a good way to prevent that from happening again is to summarily execute the offender and hang his or her corpse from the flag pole as a reminder to others.

            There's also a Murphy's Law corollary that says that no matter what your preferred workflow is, there will come a day when doing it differently is necessary or expedient. For example, trunk may be temporarily unstable, which means you wind up fixing the bug on the branch. Of course trunk is never supposed to be unstable, but there are a lo

        • If you find a bug on a release, you fix it on the trunk, test it on the trunk for a while, then merge it out onto the (supposedly stable) release.

          That way if it turns out the fix is bad, you haven't messed up the release more than it already is.
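The fix-on-trunk-then-merge-out flow described above maps onto Subversion's cherry-pick syntax (`svn merge -c REV`). A self-contained editor's sketch against a throwaway local repository, assuming `svn`/`svnadmin` are installed; all names, paths, and revision numbers here are illustrative:

```shell
set -e
rm -rf /tmp/svn-demo && mkdir /tmp/svn-demo
svnadmin create /tmp/svn-demo/repo
URL=file:///tmp/svn-demo/repo

svn mkdir -q -m "layout" "$URL/trunk" "$URL/branches"              # r1
svn copy  -q -m "release branch" "$URL/trunk" "$URL/branches/1.0"  # r2

# Fix the bug on trunk first...
svn checkout -q "$URL/trunk" /tmp/svn-demo/trunk-wc
echo "bugfix" > /tmp/svn-demo/trunk-wc/fix.txt
svn add -q /tmp/svn-demo/trunk-wc/fix.txt
svn commit -q -m "fix bug on trunk" /tmp/svn-demo/trunk-wc         # r3

# ...then cherry-pick just that revision onto the release branch.
svn checkout -q "$URL/branches/1.0" /tmp/svn-demo/rel-wc
cd /tmp/svn-demo/rel-wc
svn merge -c 3 "^/trunk" .
svn commit -q -m "merge r3 from trunk"
```

The `-c 3` limits the merge to the single bugfix revision, so the release branch picks up only the fix and not any unrelated trunk churn.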

    • Yes! That is the biggest problem with Subversion, and I hope that 1.8 fixes that as promised (although they're only promising partial support right now).

      At work one of my side jobs is managing the VC for our small (5 person) dev team. For a small closely knit team centralized VC is fine, but the very limited merge capabilities of svn drive me nuts. It's ok to create a branch (say from trunk), work on the branch, and even update it from trunk, but you're pretty much limited to a one-time reintegration merg

      • by stsp ( 979375 )

        It's ok to create a branch (say from trunk), work on the branch, and even update it from trunk, but you're pretty much limited to a one-time reintegration merge from that branch back to trunk.

        This limitation has been lifted in 1.8. As long as you use no merge commands more complex than

        svn merge ^/branch/to/merge/from

        the merge works in either direction and should never flag spurious conflicts caused by changes being applied more than once.

        There were ways to work around deleting a reintegrated branch [red-bean.com] in 1

    • by hondo77 ( 324058 )
      I've been merging to/from branches in svn for years. I did a branch-to-branch merge just the other day. Piece o' cake.
  • Sparse checkouts (Score:5, Informative)

    by IanCal ( 1243022 ) on Wednesday June 19, 2013 @09:33AM (#44049535)
    While it's a little more work than with svn, you can check out just part of a repository: http://jasonkarns.com/blog/subdirectory-checkouts-with-git-sparse-checkout/ [jasonkarns.com]

    Although I'd argue that if this is a common problem then you probably have several sub-projects.
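For reference, the sparse-checkout recipe that link describes boils down to a few commands. A self-contained editor's sketch against a throwaway local repository (all paths and names are illustrative; newer git also offers a dedicated `git sparse-checkout` subcommand for the same thing):

```shell
set -e
# Build a source repo with two subdirectories to clone from.
rm -rf /tmp/sparse-demo && mkdir -p /tmp/sparse-demo/src
cd /tmp/sparse-demo/src
git init -q
git config user.email demo@example.com && git config user.name demo
mkdir code docs
echo lib    > code/lib.txt
echo readme > docs/readme.txt
git add . && git commit -qm "initial"
BRANCH=$(git symbolic-ref --short HEAD)

# Sparse checkout: fetch the full history, but materialize only code/.
mkdir /tmp/sparse-demo/wc && cd /tmp/sparse-demo/wc
git init -q
git remote add origin /tmp/sparse-demo/src
git config core.sparseCheckout true
echo "code/" >> .git/info/sparse-checkout
git pull -q origin "$BRANCH"
# The worktree now contains code/ but not docs/.
```

Note that this trims only the working tree; the clone still contains the whole object database, which is the point IanCal is making about it being more work than svn's partial checkouts.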

  • by goruka ( 1721094 ) on Wednesday June 19, 2013 @09:33AM (#44049543)
    While GIT expresses the distributed development nature of open source projects much better nowadays, SVN fits the workflow of enterprise projects much better:

    -SVN has much better visual tools and is simpler to operate
    -SVN has simpler merge policies, which are friendlier when there isn't a central person pulling the changes.
    -SVN is very friendly for projects with a lot of binary objects (e.g. videogames)
    -SVN allows different people to work on different directories individually; GIT doesn't.
    -SVN has fine-grained permissions, access, and authentication controls, very useful when parts of your project (e.g. APIs) are under NDA or you don't want them to leak.

    They are different systems with different scenarios in mind, comparing them or claiming that GIT is killing SVN is just ignorance.
    • by Bill_the_Engineer ( 772575 ) on Wednesday June 19, 2013 @10:07AM (#44049969)

      While GIT expresses the distributed development nature of open source projects much better nowadays, SVN fits the workflow of enterprise projects much better:

      Actually... git fits the workflow better than svn. I have to manage a project that spans multiple institutions and two continents. Instead of forcing everyone to use VPN while they develop, they only need to use VPN to push to the official repository.

      SVN has much better visual tools and is simpler to operate

      What? I use SourceTree on OS X, and my coworkers on Windows like TortoiseGit. Also, there is "git gui".

      SVN has a simpler merge policies which are friendlier when there isn't a central person pulling the changes.

      What? I don't think you understand how git works.

      SVN is very friendly for projects with a lot of binary objects (ie videogames)

      Not necessarily. We use both svn and git to manage very large BLOBs and I haven't seen any noticeable differences. I have people that version control gigabytes worth of design documents that are stored in binary format and I haven't heard any complaints from them.

      SVN allows different people to work on different directories individually, GIT doesn't.

      We used to think this was a big deal, but the advantages that git has over svn more than made up for this.

      -SVN has fine grained permissions, access and authentication controls, very useful when parts of your project (ie, APIs) are under NDA or you don't want them to leak.

      What? First, not by default. The most popular access methods (https and ssh) do not. You can use Crowd to make it a little easier.

      Whereas in git I use gitolite. I manage their public keys and assign privileges based on the public key. Keep the NDA (or more importantly ITAR) in a separate git repository which makes life easier all around and satisfies the regulators too. They weren't too comfortable with trusting the single repository to handle the compartmentation correctly.
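To make the gitolite arrangement concrete, a configuration along those lines might look like the sketch below. The group, user, and repository names are hypothetical, not the poster's actual setup; access is keyed to whichever registered public key the user connects with:

```
# conf/gitolite.conf -- names are illustrative
@core-devs = alice bob
@partners  = carol

repo main-project
    RW+ = @core-devs
    R   = @partners

# NDA/ITAR material lives in its own repository with its own ACL,
# so the sensitive history is never present in the main clone at all.
repo restricted-specs
    RW  = alice
```

Keeping the restricted material in a physically separate repository, rather than a protected directory of one big repository, is what makes the compartmentation easy to demonstrate to regulators.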

      They are different systems with different scenarios in mind, comparing them or claiming that GIT is killing SVN is just ignorance.

      I operate both SVN and GIT systems. My anecdotal evidence shows that most of my projects left SVN and went with GIT due to its distributed nature. We have operational processes in place that eliminate the need for physical enforcement of a centralized repository. As an extra bonus, my co-developers like the ability to check in while they develop and then push the changes once they are confident that they won't break the build on the official repository.
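The commit-locally-then-push-when-green habit described above can be sketched in a few lines; a self-contained editor's example using a local bare repository as a stand-in for the "official" server (all paths are illustrative):

```shell
set -e
rm -rf /tmp/push-demo && mkdir -p /tmp/push-demo
git init -q --bare /tmp/push-demo/official.git       # the "official" repository
git init -q /tmp/push-demo/dev && cd /tmp/push-demo/dev
git config user.email dev@example.com && git config user.name dev
git remote add origin /tmp/push-demo/official.git

# Commit locally as often as you like -- nothing reaches the server yet.
echo "wip 1" > feature.txt && git add feature.txt && git commit -qm "wip: start feature"
echo "wip 2" > feature.txt && git commit -qam "wip: refine"

# Once the build is green, publish the whole series in one push.
git push -q origin HEAD
```

Until the final `git push`, the work-in-progress commits exist only in the developer's clone, which is exactly the property that lets people checkpoint freely without breaking the official build.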

    • I'm sure this will be an unpopular opinion, but I seriously loathe git. Its distributed development model is nice for certain types of projects, but aside from that, doing almost anything in git is more complicated and more work than doing the same thing in svn.

      I think the fundamental architectural flaw in git is that it insists on a 1:1 correspondence between repositories and working copies. If you want a working copy, you absolutely must have a unique repository to go along with it, and that repository m

  • by Nutria ( 679911 ) on Wednesday June 19, 2013 @09:34AM (#44049547)

    Who has a 1TB repository? Even 200GB is 15 shades of ginormous.

    • by jez9999 ( 618189 ) on Wednesday June 19, 2013 @09:35AM (#44049575) Homepage Journal

      Who doesn't store their DVD collection in source control?

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Only for George Lucas Star Wars revisions...

        • Lucas doesn't need version control for his Star Wars changes. Once he's got the new one, he throws all of the old versions away.

    • Re:1TB repository? (Score:4, Informative)

      by Xest ( 935314 ) on Wednesday June 19, 2013 @09:46AM (#44049697)

      Lots of places really.

      Some companies use it for versioning of content as well as just source code and that may mean archiving raw versions of said content such as images, 3D art assets, uncompressed audio and so forth.

      • Re:1TB repository? (Score:5, Informative)

        by Anonymous Coward on Wednesday June 19, 2013 @10:04AM (#44049937)

        On the last games I worked on, a minimum initial sync to build was around 50GB; a full sync was over 150GB (MANY different language versions of NIS movies). I have no idea how big the revision database was, but I'm going to guess freakin' huge, with over 600,000 commits during the project. The BlueArc backing Perforce was pretty large.
        Git would explode in a cloud of fail if you tried to do anything like that; in my opinion it's for toy-sized projects.
        Where I work now (at a semiconductor company) the people calling the shots switched from Perforce to Git and it truly sucks. There are over 400 repositories to build the project, managed by a nasty set of shell scripts. What I wouldn't give to have a single place to manage all of the changes.

    • by Anonymous Coward on Wednesday June 19, 2013 @10:19AM (#44050145)

      I worked in game programming several years back, and 1TB was quite reasonable. Branching meant IT would bring everyone a new hard drive to store the new branch on.

      IIRC the client for one of the MMOs I worked on was a 20 gig download. The source code that actually went into that... was big, but couldn't possibly have been a gig of raw text before being compiled; the art, on the other hand, was sliced, diced, squeezed and compressed to get it down into that small of a download. Art Is Huge. Especially art in its raw forms -- even if the final game art is going to be some horribly compressed format to save space, artists want to record the initial sound files in pristine high-bitrate forms, and want to do the initial drawings in zillion-pixel-wide, maximum-color-depth formats, and then compress later. The intro video that nobody watches probably took up more space in the repo than all of our source code files, because it was rendered for impossibly good monitors.

      So, in our repo, we have both the compressed low-bitrate MP3 voiceovers that go into the game, and also the uncompressed perfect form from the recording sessions (just in case we want to recompress to MP3 later so we can have a higher bitrate... or maybe we'll swap to using ogg next year... or just for historical interest? it's a repo, you check stuff in...). And similarly for the textures -- the original Photoshop files at maximum size and color depth are gorgeous... and then there's the smooshed version you get on your computer. But we have to store the maximum-size one, because that's the one we're going to edit if we make a change! And it's version control, so the repo has this hard-to-compress binary (trust me, Photoshop files don't compress nearly as well as Python files), possibly in a dozen different versions, because all of your art got reviewed and edited as it passed through various layers of management and licensees... And then of course there's video too -- cutscenes and intro video and such.

      There's no chance that you could get a repo like that to work on git. We used perforce rather than svn; perforce is (or at least was at the time) the popular tool in the gaming industry for source control (it's expensive, but stable and has good support for massive repos), but I can see lower budget places going for svn. Git just isn't designed for huge repos full of binary blobs.

      • I worked in game programming several years back, and 1TB was quite reasonable. Branching meant IT would bring everyone a new hard drive to store the new branch on.

        Git just isn't designed for huge repos full of binary blobs.

        Apparently neither is Perforce/SVN. More to the point, I am aware of no sound basis for that statement. Git does binary diffs just fine, and compresses the data when reasonable, and stores file deltas where that is more efficient. Cherry-picking a change to a binary file sounds painful, but it's likely that's a stupid idea that would not be tenable in any RCS. If there is some secret limitation to how git stores files, I'd love to hear about it. My suspicion is that this "svn is better for large repos" idea

  • by ideonexus ( 1257332 ) on Wednesday June 19, 2013 @09:41AM (#44049625) Homepage Journal

    I started using Git last year for my personal projects. It's a fantastic platform for coding as a social-network. I love that I can grab code I need from other developers around the world, tweak it, and send it back with a few suggestions. I love that I can follow other projects without having to get involved. Git is awesome.

    That being said, we still use SVN for our internal development. The GUI of TortoiseSVN is simply comprehensive. I realize that Git offers more options, but if those options aren't available with a simple right-click, then I don't have the time for them. TortoiseSVN makes everything readily available, while Git makes me run to the command line too often.

    • by ameen.ross ( 2498000 ) on Wednesday June 19, 2013 @09:54AM (#44049815)

      TortoiseGit? [google.com]

    • Do you also like the tree conflicts you get when moving directories around in your project? Those are my favorite thing in SVN. In theory, they've made this better in the new version but I'll believe it when I see it myself. That's something about SVN that just really pisses me off.

      • by stsp ( 979375 )

        Do you also like the tree conflicts you get when moving directories around in your project? Those are my favorite thing in SVN. In theory, they've made this better in the new version but I'll believe it when I see it myself. That's something about SVN that just really pisses me off.

        Tree conflicts [fu-berlin.de] are inherent to any version control system, not just Subversion.

        People complain about Subversion's tree conflict handling a lot. I believe this is because development work done so far was only about detection of t

  • Git is auto-backup (Score:4, Insightful)

    by mathimus1863 ( 1120437 ) on Wednesday June 19, 2013 @09:51AM (#44049773)
    For the same reason the summary complains about users having to clone the entire repo, you don't really have to deal with backups: the entire repo is already backed up by every single one of your users. In fact, this is one reason I use git, because I know that I don't have to worry about backing up. I can sync with github, and know that if github disappears, my code history doesn't go with it.
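To make the every-clone-is-a-backup property explicit rather than incidental, a remote can be given more than one push URL, so a single push lands in two places. A self-contained editor's sketch using local bare repositories as stand-ins for, say, GitHub plus an internal mirror (all paths are illustrative):

```shell
set -e
rm -rf /tmp/mirror-demo && mkdir -p /tmp/mirror-demo
git init -q --bare /tmp/mirror-demo/github.git   # stand-in for github
git init -q --bare /tmp/mirror-demo/backup.git   # stand-in for an internal mirror
git init -q /tmp/mirror-demo/work && cd /tmp/mirror-demo/work
git config user.email demo@example.com && git config user.name demo
echo hi > file.txt && git add file.txt && git commit -qm "initial"

# One logical remote, two push URLs: every push lands in both repositories.
git remote add origin /tmp/mirror-demo/github.git
git remote set-url --add --push origin /tmp/mirror-demo/github.git
git remote set-url --add --push origin /tmp/mirror-demo/backup.git
git push -q origin HEAD
```

With this in place, losing the hosting service costs you nothing but the remote URL; the mirror (and every developer's clone) still holds the full history.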
    • For small repos that's fine, it's a feature. For large repos it's not fine, it's a serious problem. I literally cannot fetch the Android-x86 repo. git won't just finish sending the old stuff if new stuff has appeared since your last attempt to fetch the repo.

      This wouldn't be as big a problem if people didn't think that git access was a substitute for a tarball, so that if you just need to build the fucking code you can do so.

      • by Anonymous Coward on Wednesday June 19, 2013 @10:17AM (#44050123)

        For small repos that's fine, it's a feature. For large repos it's not fine, it's a serious problem. I literally cannot fetch the Android-x86 repo. git won't just finish sending the old stuff if new stuff has appeared since your last attempt to fetch the repo.

        This wouldn't be as big a problem if people didn't think that git access was a substitute for a tarball, so that if you just need to build the fucking code you can do so.

        Use 'git clone --depth 1' if you only want the most recent revision. It will allow you to update it from upstream but obviously you won't be able to go back into the history.

        • Use 'git clone --depth 1' if you only want the most recent revision. It will allow you to update it from upstream but obviously you won't be able to go back into the history.

          this is very useful information; thanks. too bad git doesn't do this first, and then fill in the history later. that would make it dramatically more usable for the average human.
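For what it's worth, later versions of git can do exactly that backfill: a shallow clone can be deepened, or fully un-shallowed, after the fact with `git fetch --unshallow`. A self-contained editor's sketch against a throwaway local repository (paths are illustrative; a `file://` URL is needed because plain local-path clones ignore `--depth`):

```shell
set -e
rm -rf /tmp/shallow-demo && mkdir -p /tmp/shallow-demo/src
cd /tmp/shallow-demo/src
git init -q
git config user.email demo@example.com && git config user.name demo
for i in 1 2 3; do echo "$i" > f.txt && git add f.txt && git commit -qm "commit $i"; done

# Shallow clone: only the newest commit comes down the wire.
git clone -q --depth 1 file:///tmp/shallow-demo/src /tmp/shallow-demo/shallow
git -C /tmp/shallow-demo/shallow rev-list --count HEAD   # prints 1

# ...and the history can be backfilled later, on demand.
git -C /tmp/shallow-demo/shallow fetch -q --unshallow
git -C /tmp/shallow-demo/shallow rev-list --count HEAD   # prints 3
```

So the "fetch the tip now, fill in history later" workflow the comment asks for does exist; it just isn't the default.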

  • I've been using bazaar [canonical.com] and it's everything subversion is, but better. It's had true renames since I've been using it and it really knows how to merge branches properly (which were the big big problems I've had with subversion).
  • For personal stuff, or things I intend to share haphazardly or widely, I go git all the way. Git is great for when I need to work disconnected and still keep revisions. It's simple, it's fast, and it's so easy to make a quick repo that sometimes I make one for things I wouldn't otherwise, because even if it never gets used again, the overhead of turning a directory into a git repo is trivial.

    That said, there are times when i have other requirements. Things that are going to need to be tracked long term, be share

  • On the flip side, Subversion stores a pristine copy of every file to avoid a network round trip when doing stuff like diffs. So if you have a 500MB working directory, it's backed by another 500MB worth of pristine copy. A typical Git clone can hold the entire history of the project in less space than that because it is packed down during the clone operation.

    But definitely the ability to check out just a few folders or files is an advantage of CVS and Subversion. Git forces everything to be cloned although

  • Decent comparison (Score:5, Informative)

    by ameen.ross ( 2498000 ) on Wednesday June 19, 2013 @10:05AM (#44049953)

    I'll just leave this here so people get to see a comparison that's actually useful.

    https://git.wiki.kernel.org/index.php/GitSvnComparison [kernel.org]

  • We work on lots of asset rich projects - often the PSDs folder will span many (hundreds!) gigabytes for a small project, and we'll be working on multiple projects at once. The PSDs then often have source imagery and a whole load of clutter collected from clients. SVN is superb at dealing with this scenario, since as a coder I don't fill my machine with sources I don't need - but we need them archived and backed up none the less. This is especially a problem if you're a developer with a fancy small SSD and
  • We use Microsoft VSS. No, don't hit me!

    I get punched in the face several times a day already.

  • Misleading summary (Score:5, Informative)

    by stsp ( 979375 ) on Wednesday June 19, 2013 @10:26AM (#44050217) Homepage

    I'm a Subversion developer and would like to clarify this bit of the summary:

    Major new features of 1.8 include switching to a new metadata storage engine by default instead of using Berkeley DB, first-class renames (instead of the CVS-era holdover of deleting and recreating with a new name) which will make merges involving renamed files saner, and a slightly simplified branch merging interface.

    The "new metadata storage engine" probably refers to FSFS which has been the default repository backend since Subversion 1.2. FSFS has been improved since then, and 1.8 contains some new improvements (such as directory deltification) but does not add a new repository backend. The BDB-based backend is the one from Subversion 1.0 and is rarely used these days.

    Subversion 1.8 doesn't contain support for "first-class renames". Renames are still modeled as copy+delete [apache.org], with special annotations. The working copy is so far the only part of the system which is aware of moves. There are plans to make other subsystems aware of moves in future releases. Also, while tree conflicts involving local moves can now be auto-resolved after 'svn update', 'svn merge' mostly behaves as it did in 1.7, except that there is no need to use the --reintegrate option, and tree conflicts are now flagged if a directory was renamed or deleted on the merge source branch, whereas Subversion 1.7 would unconditionally perform the deletion in the merge target.

  • first-class renames (instead of the CVS-era holdover of deleting and recreating with a new name)

    Has SVN missed the boat? Basic features that were desirable 10 years ago led people to use BitKeeper, arch/arX/tla/Bazaar, and eventually Hg, monotone and Git. (I guess some folks were using Darcs too.) Now that people have tackled the learning curve and the lack of tools, many of these alternatives to CVS and SVN have support in numerous IDEs and desktop environments.

    It is probably great news for people still using SVN. But why should we keep going in that direction? I also wonder why Perforce i

  • It's enterprise software, and we were actually informed by Perforce after maybe 10 years on a license that since our user base is so small we can use it for free. I think the cutoff is based on the number of licenses per server. I've never needed my own personal SCM, but I'd go with Perforce if I ever needed one.
