Facebook Research Releases Tech To Create 3D Models of People From Photographs (github.io)
Facebook Research has released technology to create 3D models of people from photographs, reports shirappu: The technology, called PIFuHD, takes a photograph of a person and reconstructs them in 3D. It uses deep neural networks with a multi-level architecture that produces high-resolution, accurate 3D models even with limited memory. More is available in the detail-heavy research paper.
Applications for this kind of automated image digitization include medical imaging and virtual reality, and the researchers have released a version of the model for users to try out themselves on Google Colab.
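For a rough sense of how the pixel-aligned implicit-function idea behind PIFuHD works, here is a minimal PyTorch sketch (not the released code; the class, module sizes, and helper names are illustrative assumptions): an image encoder produces a feature map, each 3D query point is projected into the image, the feature under that pixel is sampled, and a small MLP predicts whether the point lies inside the body. PIFuHD stacks a second, higher-resolution level on top of a coarse module like this, and a mesh is extracted from the predicted occupancy field with marching cubes.

# Minimal sketch of a pixel-aligned occupancy predictor (illustrative only,
# not the facebookresearch/pifuhd implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedOccupancy(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stand-in image encoder; the paper uses a much deeper backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # MLP maps (pixel-aligned feature, point depth) -> occupancy in [0, 1].
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, image, points):
        # image: (B, 3, H, W); points: (B, N, 3) with x, y in [-1, 1]
        # (image-plane coordinates) and z the depth along the camera ray.
        feats = self.encoder(image)                             # (B, C, H', W')
        xy = points[:, :, :2].unsqueeze(2)                      # (B, N, 1, 2)
        # Bilinearly sample the feature under each point's projection.
        sampled = F.grid_sample(feats, xy, align_corners=True)  # (B, C, N, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)          # (B, N, C)
        z = points[:, :, 2:3]                                   # (B, N, 1)
        return self.mlp(torch.cat([sampled, z], dim=-1))        # (B, N, 1)

# Usage: evaluate occupancy on a dense 3D grid, then run marching cubes
# (e.g. skimage.measure.marching_cubes) on the volume to recover a mesh.
model = PixelAlignedOccupancy()
image = torch.rand(1, 3, 512, 512)
points = torch.rand(1, 4096, 3) * 2 - 1
occupancy = model(image, points)   # (1, 4096, 1) inside/outside probabilities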
Ugh (Score:2, Funny)
Given a high-resolution single image of a person, we recover highly detailed 3D reconstructions of clothed humans at 1k resolution.
Useless.
Re: (Score:2)
This is no time to discuss bilocation.
Re: (Score:2)
unclothed fork incoming
1) Add in video
2) A little refinement
3) Porn video of anyone you choose
4) Profit
Re: (Score:2)
It's interesting that the expected deepfake porn apocalypse never materialized. It seems it just wasn't good enough: too uncanny and glitchy, bodies didn't match, etc.
I expect this will be the same. Even if the results were photorealistic, they would need to be convincingly animated too. Well, I suppose with static shots they could do something, but given that nude celebrities couldn't keep Playboy going...
Re: Waste of time... (Score:2)
I would love to see something like this if it were reliable. But I feel like it would be a significantly harder challenge.
I am just as concerned about false positives as I am about finding these sickos. I guess arguably it would depend on the outcome. I wouldn't want an innocent person being convicted because some AI really thinks they are guilty, but if it really came down to a ban which had a way to appeal, it might not be so bad.
There is just that thing where there are some things you can be accused of
Re: (Score:2)
Now if only they would use their damn resources to help find child sexual and physical abuse victims in images pedos and assholes upload. But no, that's too much work and they want to be lazy and do nothing.
Well, you went full weirdo from a standing start there. I mean, there are a lot of people putting a lot of effort into exactly that. Why it was the first thing you thought of when someone produced what is effectively a new CGI method for Hollywood is hard to see.
You do understand that this isn't magic, don't you? The "rear view" of the models is entirely conjectural and the goal is for it to be plausible, not accurate.
Stay calm and smile (Score:2)
People will take pictures of friends and family, turn them into 3D models, and play all sorts of jokes on them. Others will use it for body-shaming and to make cruel jokes. Best to start taking your pictures off the Internet if you don't have thick skin and a sense of humour.
Re: (Score:2)
Take someone's picture, turn it into 3D, and make a dildo from it with a 3D printer. ... Yeah, that's going to happen to a lot of people soon.
Again, racial bias in machine learning (Score:1)
Re: (Score:2)
Any mistakes the algorithm makes with white people and it's going to look like cotton candy, any mistakes it makes with black people and it's going to look like turds. So it's probably for the best.
Faint shadows are key - pale is best for demos (Score:2)
That's unfortunate, and I hope they have various complexions in their full set.
Also, I'm quite sure they use their BEST results for the demo. When you're inferring 3D shapes from slight shadows, the lightest possible base color will provide the most impressive results.
Annie May (Score:2)