Topics: Google, Software, The Almighty Buck

GV, Formerly Known as Google Ventures, For Years Has Used an Algorithm That Effectively Permits or Prohibits Both New and Follow-on Investments (axios.com) 74

Dan Primack, reporting for Axios: When most venture capitalists want approval to make a new investment, they go to their partners. When venture capitalists at GV do it, they go to something called "The Machine." Axios has learned that the firm, formerly known as Google Ventures, for years has used an algorithm that effectively permits or prohibits both new and follow-on investments. Staffers plug all sorts of deal details into "The Machine" -- which is programmed with reams of market data -- and it returns traffic-signal-like outputs. Green means go. Red means stop. Yellow means proceed with caution, but sources say it's usually the practical equivalent of red. It was initially designed and used as a due-diligence assistant that could be overruled, but, according to three sources, it has evolved into a de facto investment committee.
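
For illustration only, here is a minimal sketch of how a traffic-signal deal gate like this could be structured. The feature names, weights, and thresholds below are invented; Axios does not describe GV's actual model.

    # Hypothetical traffic-signal deal gate; GV's real model is not public.
    from dataclasses import dataclass

    @dataclass
    class Deal:
        sector_growth: float   # market data: annual growth rate of the sector
        founder_exits: int     # deal detail: prior successful exits by the founders
        burn_multiple: float   # deal detail: cash burned per dollar of new revenue

    def score(deal: Deal) -> float:
        # Weighted blend of deal details and market data (weights are invented).
        return (0.5 * deal.sector_growth
                + 0.3 * min(deal.founder_exits, 3) / 3
                - 0.2 * deal.burn_multiple)

    def light(deal: Deal) -> str:
        s = score(deal)
        if s >= 0.6:
            return "green"   # go
        if s >= 0.4:
            return "yellow"  # proceed with caution; in practice, usually a no
        return "red"         # stop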
This discussion has been archived. No new comments can be posted.


  • I, for one, welcome our new....wait, what?

  • I am just surprised that 42 is not an optional answer.

  • by bogaboga ( 793279 ) on Thursday July 19, 2018 @12:01PM (#56975150)

    When venture capitalists at GV do it, they go to something called "The Machine." Axios has learned that the firm, formerly known as Google Ventures, for years has used an algorithm that effectively permits or prohibits both new and follow-on investments

    You see, I'm confident you'll all agree that Google's messaging paradigm is in, and remains in, "chaos/confusion/mayhem/tumult land" right now.

  • by orev ( 71566 ) on Thursday July 19, 2018 @12:03PM (#56975168)
    It doesn't really matter unless there is data on whether the system works or not. Nothing in the article seems to mention this.
    • My brother did the same thing with Excel. Green, the project was good. Red, the project was a no-go. He designed the criteria himself, so there was nothing groundbreaking.

      It was just a way to keep himself honest. Too many times he got too emotionally invested in a transaction and lied to himself about the numbers. Deciding on the criteria beforehand and codifying the rules in a spreadsheet helped him see things more clearly and made it easier for him to walk away.
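
      The same pre-commitment idea fits in a few lines; a sketch, with made-up criteria and cutoffs rather than his actual spreadsheet rules:

        # Sketch: go/no-go rules decided up front, before emotions get involved.
        RULES = [
            ("gross margin >= 40%",  lambda d: d["gross_margin"] >= 0.40),
            ("payback <= 24 months", lambda d: d["payback_months"] <= 24),
        ]

        def verdict(deal: dict) -> str:
            failed = [name for name, check in RULES if not check(deal)]
            return "green" if not failed else "red: fails " + ", ".join(failed)

        # e.g. verdict({"gross_margin": 0.35, "payback_months": 30})
        #      -> "red: fails gross margin >= 40%, payback <= 24 months"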

    • by dj245 ( 732906 ) on Thursday July 19, 2018 @02:42PM (#56976158) Homepage

      It doesn't really matter unless there is data on whether the system works or not. Nothing in the article seems to mention this.

      This is actually two questions: whether it worked in the beginning, and whether it works now.

      There is presumably no data available for initial or follow-on investments that were not made. Either the investment wasn't made at all, or someone else made it, so Google probably has little or no outcome data. Therefore, any adjustments to the algorithm based on greenlighted projects that then failed will probably overfit, because the data on yellow- or red-lighted projects that could have succeeded is incomplete. Unless Google deliberately invests in some yellow or red projects to test the algorithm, it may get more and more conservative over time.
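
      One standard fix for that feedback loop is deliberate exploration, e.g. an epsilon-greedy policy: follow the signal most of the time, but occasionally fund a rejected deal purely to generate outcome data. A sketch (the function name and 5% rate are illustrative, not anything GV is reported to do):

        import random

        def decide(light: str, epsilon: float = 0.05) -> bool:
            # Follow the model's signal most of the time, but with probability
            # epsilon fund a yellow/red deal anyway, so rejected deals
            # occasionally produce the outcome data needed to audit the model.
            if light == "green":
                return True
            return random.random() < epsilon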

    • Well, we know it doesn't have any inputs about "Would it piss our users off if we change course abruptly? Did we make any representations to users about whether they should use this, or avoid it because it is temporary?"

      Or, to put it in terms bean-counters can understand: It doesn't even have an account to track Goodwill provisionally earned from past decisions!

  • by fahrbot-bot ( 874524 ) on Thursday July 19, 2018 @12:05PM (#56975184)

    ... an algorithm that effectively permits or prohibits both new and follow-on investments.

    My parents, friends, wife, etc... sometimes say "yes" or "no" to things I want to do and/or do again.

    [ Is the problem here that it's an "algorithm" (presumably written by people) or that it made decisions? ]

    • by suutar ( 1860506 )

      From the summary, it sounds like the concern is that it's being treated like an infallible oracle. It may not have come up with a significant false positive yet, but there's no way to tell whether it's had false negatives, and unlike human advisors it's probably not very good at explaining why it came to a given conclusion (unless you're willing to trace the code while it processes something).

      • by mccalli ( 323026 )
        An infallible scapegoat. If it says no and you proceed, your judgement is on the line. If it says yes and you proceed but it fails...well, you did all your diligence, right? Not your fault...
      • I'm not even sure human advisers are particularly good at explaining their rationale. Sure, they can give you an explanation, but I'm not sure it's actually any good. Silicon Valley angel investment has terribly low success rates, even among investors who tend to outperform the average. I suspect it has less to do with picking winners and more to do with the winner you do pick providing massive returns.

        If, as you point out, the only concern is just how good the system is, just consider every case wh
      • ... infallible oracle ...

        Words not often used in the same sentence - no matter how much Larry Ellison would like it.

      • there's no way to tell if it's had false negatives

        If GV declines to invest but other VCs invest successfully, then that is a clear false negative.

        it's probably not very good at explaining why it came to a given conclusion

        As long as it works, it isn't really necessary to get an explanation. Plenty of decisions are based on statistical analysis of outcomes, rather than understanding complicated internals.

        A human VC may give you a long-winded rationale for an investment, but the real reason is that he is having an affair with the target's CEO, or that he made a side investment with his own money. The algorithm works because it byp

      • by rgmoore ( 133276 )

        False negatives are much less of a big deal in investing than false positives. One of the basic problems with venture capital is that there are way more investment opportunities than any firm has money to invest in. That means you're always going to have missed opportunities no matter how smart your decision-making process is. As long as you find enough good opportunities, it doesn't really matter that you missed others.
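
        A toy illustration of the capital-constraint point (all numbers invented):

          # With far more deals than slots, false negatives are cheap.
          deals_seen, slots, good_deals = 1000, 20, 50
          false_negatives = 10                            # good deals wrongly rejected
          still_available = good_deals - false_negatives  # 40 good deals remain
          assert still_available >= slots                 # still enough to fill every slot
          # A false positive, by contrast, burns one of the scarce 20 slots.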

        • Comment removed based on user account deletion
        • False negatives are much less of a big deal in investing than false positives.

          So you're saying that $1 of opportunity cost is less than $1 of operational losses?

          If you said the exact opposite, it would be more true, because $1 of losses is a write-off, and $1 of opportunity cost is not. Opportunity cost is market share you'll never have another chance at!

      • Traditionally, an Oracle is a religious designation for a type of person who can listen to the words of the Gods and translate them into human speech.

        You don't do what the Oracle tells you because the Oracle is infallible, for the Gods are themselves neither infallible nor guaranteed to be honest. You do it because you traveled all the way to the temple, already made the required donation, and it will make the Gods angry if you do not then heed their words. That's true regardless of whether they chose to give yo

        • by suutar ( 1860506 )

          Fair point. Unjustified reverence for computer output has been common for long enough that I assumed that to be the case, and did not consider that they might think it could be wrong but not worth the effort to disagree with.

    • by sinij ( 911942 )
    The problem is that we are replacing irrational but understandable human decision-making with an opaque algorithm written by irrational humans.

    It is much harder to detect and correct flaws in that kind of decision making.
  • color == green
  • Juicero (Score:4, Interesting)

    by Swistak ( 899225 ) on Thursday July 19, 2018 @01:10PM (#56975590) Homepage
    Wasn't that the fund that put shittons of money into Juicero?
    • From: https://www.bizjournals.com/sa... [bizjournals.com]

      The San Francisco-based company is backed by plenty of Silicon Valley investors including Kleiner Perkins Caufield & Byers, GV and DBL Partners. Other investors include Thrive Capital, Campbell Soup Company, Acre Venture Partners, Two Sigma Ventures, and First Beverage Group.

      Nice job 'redistributing the wealth', Juicero. But that's giving Juicero too much credit. They were apparently not 'smart cynics', but were 'dumbass vegan true believers'. They apparently sp

      • Comment removed based on user account deletion
        • If you adapt it into a new Stargate spinoff it might be good.

          I'd definitely rent the DVD from the library in 5 years, or just download it.

          --NoTVGuy

      • I thought the purpose was to redistribute high-quality machine parts, in sizes useful to the maker community, to the salvage market?

  • by Colin Castro ( 2881349 ) on Thursday July 19, 2018 @01:14PM (#56975622)

    People are often afraid to make their own decisions. I'm not surprised by this, though, considering how often banks rely on a computer program to approve or deny each of their products. People make almost zero decisions at many levels.

  • ... that this is just a spreadsheet?
