Author: 스마트오토시티
Date: Saturday, November 26, 2016
Homepage: http://cpagent.com
Google's AI team 'DeepMind' and University of Oxford researchers develop software that analyzes people's lip movements

Researchers from Google's AI team 'DeepMind' and the University of Oxford have reportedly developed software that analyzes people's lip movements.




http://www.theverge.com/…/13740798/google-deepmind-ai-lip-r…





Google’s AI can now lip read better than humans after watching thousands of hours of TV

The AI system annotated TV footage with 46.8 percent accuracy

Researchers from Google’s AI division DeepMind and the University of Oxford have used artificial intelligence to create the most accurate lip-reading software ever. Using thousands of hours of TV footage from the BBC, scientists trained a neural network to annotate video footage with 46.8 percent accuracy. That might not seem that impressive at first — especially compared to AI accuracy rates when transcribing audio — but tested on the same footage, a professional human lip-reader was only able to get the right word 12.4 percent of the time.
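To make the comparison above concrete, here is a minimal Python sketch of one common way to score lip-reading transcripts at the word level: edit distance between the predicted and reference word sequences (word error rate), with "accuracy" read as 1 minus WER. The exact metric behind the figures quoted above, and the sample sentences below, are assumptions for illustration only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    # Standard Levenshtein distance over words (substitutions, insertions, deletions).
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical transcripts: the model recovers about half the words, the human far fewer.
reference = "the prime minister will answer questions in the house today"
model_out = "the prime minister will answer the question in town"
human_out = "a banister will ask the question now"
print(f"model accuracy: {1 - word_error_rate(reference, model_out):.1%}")  # ~50%
print(f"human accuracy: {1 - word_error_rate(reference, human_out):.1%}")  # ~20%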




The research follows similar work published by a separate group at the University of Oxford earlier this month. Using related techniques, these scientists were able to create a lip-reading program called LipNet that achieved 93.4 percent accuracy in tests, compared to 52.3 percent human accuracy. However, LipNet was only tested on specially recorded footage of volunteers speaking formulaic sentences. By comparison, DeepMind's software, known as "Watch, Listen, Attend, and Spell," was tested on far more challenging footage, transcribing natural, unscripted conversations from BBC politics shows.
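The name "Watch, Listen, Attend, and Spell" points at the general family of attention-based encoder-decoder models: encode the watched (and heard) input, attend over the encoded sequence, and spell out the transcript one character at a time. Below is a minimal PyTorch sketch of that pattern for a video-only lip reader. It is an illustrative skeleton under assumed layer sizes and a single video stream, not DeepMind's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LipReader(nn.Module):
    # Hypothetical attention-based encoder-decoder; all sizes and names are assumptions.
    def __init__(self, vocab_size: int, feat_dim: int = 256):
        super().__init__()
        # "Watch": a small CNN turns each 64x64 grayscale mouth crop into a feature vector.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(64 * 16, feat_dim),
        )
        self.encoder = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        # "Spell": a character-level decoder conditioned on attention over encoder states.
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.decoder = nn.LSTMCell(feat_dim * 2, feat_dim)
        self.out = nn.Linear(feat_dim, vocab_size)

    def forward(self, frames, chars):
        # frames: (B, T, 1, 64, 64) video clip; chars: (B, L) previous characters (teacher forcing).
        B, T = frames.shape[:2]
        feats = self.frame_cnn(frames.flatten(0, 1)).view(B, T, -1)
        enc, _ = self.encoder(feats)                                   # (B, T, D)
        h = torch.zeros(B, enc.size(-1))
        c = torch.zeros_like(h)
        logits = []
        for t in range(chars.size(1)):
            # "Attend": dot-product attention of the decoder state over encoded frames.
            scores = torch.bmm(enc, h.unsqueeze(-1)).squeeze(-1)       # (B, T)
            weights = F.softmax(scores, dim=-1)
            context = torch.bmm(weights.unsqueeze(1), enc).squeeze(1)  # (B, D)
            h, c = self.decoder(torch.cat([self.embed(chars[:, t]), context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                              # (B, L, vocab_size)

# Example: two 20-frame clips, decoding 10 characters over a 40-symbol alphabet.
model = LipReader(vocab_size=40)
out = model(torch.randn(2, 20, 1, 64, 64), torch.randint(0, 40, (2, 10)))
print(out.shape)  # torch.Size([2, 10, 40])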




More than 5,000 hours of footage from TV shows including Newsnight, Question Time, and the World Today was used to train DeepMind's "Watch, Listen, Attend, and Spell" program. The videos included 118,000 different sentences and some 17,500 unique words, compared to LipNet's test database of videos containing just 51 unique words.




DeepMind’s researchers suggest that the program could have a host of applications, including helping hearing-impaired people understand conversations. It could also be used to annotate silent films, or allow you to control digital assistants like Siri or Alexa by just mouthing words to a camera (handy if you’re using the program in public).




But when most people learn that an AI program has learned how to lip-read, their first thought is how it might be used for surveillance. Researchers say that there's still a big difference between transcribing brightly lit, high-resolution TV footage and grainy CCTV video with a low frame rate, but you can't ignore the fact that artificial intelligence seems to be closing this gap.




   