Monday, August 21, 2017

Toolbox

  • Thursday, Aug. 17, 2017
PlayBox Technology demos at IBC to feature its CloudAir and Neo platforms
PlayBox Technology's CloudAir platform
LONDON -- 

PlayBox Technology will demonstrate complete broadcast playout solutions leveraging its cloud-based CloudAir and server-based Neo platforms at the IBC 2017 exhibition in Amsterdam, Sept. 15-19. Hybrid configurations combining the strengths of both platforms will also be shown.

“Broadcasters today are demanding speed and flexibility in the way they set up and manage their services,” said Don Ash, president of PlayBox Technology. “Partnership agreements between PlayBox Technology and an increasing number of communication service providers have made CloudAir more accessible than ever to existing and would-be broadcasters throughout the world. CloudAir eliminates the need for channel managers to wait for new technical hardware to be delivered, installed and commissioned. Available on a fast-startup software-as-a-service basis, CloudAir provides a foundation for highly efficient broadcasting via terrestrial, satellite and dedicated cable wherever and whenever these are the channel management’s preferred delivery media. It makes the process of starting a new channel as simple as making a phone call, either direct to their preferred service provider or via the global network of PlayBox Technology support offices.”

“CloudAir also gives content owners the ability to start purely IPTV-based channels at very short notice, accessible to online viewers in any country. IPTV channels can be operated to a published schedule or as viewer-specific time-buffered video-on-demand,” added CEO Pavlin Rahnev. “Channel managers can control the whole process of branding and playout via a secure link from a desktop or even a laptop computer. They can upload content via the same link ahead of transmission while retaining the freedom all broadcasters appreciate to add late-breaking content, such as news stories, to the playout schedule. Entire channels can be operated this way without managers needing to own, accommodate and maintain dedicated hardware. We will also be demonstrating the ease with which CloudAir can be integrated with our established Neo server-based product series to form a hybrid of onsite and offsite channel management and playout resources. An increasing number of Neo customers are already seeing the advantages CloudAir offers as a remote disaster-recovery solution and as a medium for single-event OTT or full 24/7 fast-startup television channels.”

Among new CloudAir features making their IBC debut will be a transcoder capable of handling multiple file wrappers and formats, including MPEG PS/TS, MXF, QT, AVI, MP4, GXF, MPEG-2, H.264, ProRes, DNxHD and MJPEG. Also being introduced to European broadcasters are an enhanced graphics editor template preparation interface, improved playlist editing, advanced playlist export to EPGs and automated linking of stored assets.

A new addition to the Neo platform, Neo TS IP Stream Delay, will make its maiden exhibition appearance. Occupying a standalone 1U chassis, Neo TS IP Stream Delay provides fully transparent delay of IP transport streams, such as DVB/ATSC MPEG broadcast-quality compressed video and audio, for single- or multichannel time-zone-shift and disaster-recovery applications. Designed for fully automated operation, it can be configured with multiple input channels and multiple delayed outputs. Each input also has one zero-delay output. All operating parameters, including channel-specific time delay in 15-second increments, are easily adjusted via an integral web-based user interface. Maximum delay duration depends on input bit rate and storage capacity. Additional features include programme information display of MPEG-compliant transport streams plus automatic error logging.
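
The underlying mechanism is straightforward to picture in code. Below is a minimal conceptual sketch, in Python, of a transparent transport-stream delay: incoming UDP datagrams are buffered and re-emitted after a fixed offset. The addresses, ports and delay value are illustrative assumptions, and this is not PlayBox's implementation, which adds disk spooling for long delays, multichannel configuration and the web-based UI described above.

```python
# Conceptual sketch of a fixed-offset IP transport-stream delay (not product code).
import socket
import time
from collections import deque

DELAY = 15.0  # seconds; the product steps its per-channel delay in 15 s increments

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("0.0.0.0", 5000))       # example input: DVB/ATSC TS over UDP
rx.setblocking(False)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

buf = deque()                    # (arrival_time, datagram) pairs awaiting re-emission

while True:
    try:
        # A TS-over-UDP datagram typically carries seven 188-byte TS packets.
        pkt, _ = rx.recvfrom(2048)
        buf.append((time.monotonic(), pkt))
    except BlockingIOError:
        pass                     # no packet waiting
    # Re-emit each datagram once it has aged DELAY seconds (the delayed output);
    # a zero-delay output would simply forward packets as they arrive.
    while buf and time.monotonic() - buf[0][0] >= DELAY:
        _, pkt = buf.popleft()
        tx.sendto(pkt, ("239.0.0.1", 5001))  # example delayed multicast output
    time.sleep(0.001)            # avoid spinning a CPU core flat out
```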

Over 40 new features for other modules in the Neo series will be introduced at IBC 2017. These include the ability to integrate ProductionAirBox Neo closely with the Associated Press ENPS news production system via a MOS gateway. Among other additions to the capabilities of PlayBox Neo are extended control features, expanded file-handling capabilities, greater input and output connectivity and Microsoft Windows 10 compatibility. These have all been implemented within an informative and intuitive graphic interface familiar to operators around the world.

PlayBox Technology Limited is an international communications and information-technology company serving the broadcast and corporate sectors in more than 120 countries. Over 17,000 TV and branding channels are powered by PlayBox Technology Limited broadcast solutions. Users include national and international broadcasters, start-up TV channels, webcasters, DVB (IP/ASI) TV channels, interactive TV and music channels, film channels, remote TV channels and disaster recovery channels.

  • Wednesday, Aug. 16, 2017
Timeline Television to showcase its IP 4K HDR OB truck at IBC
Timeline Television's newest OB truck
NEWBURY, UK -- 

Snell Advanced Media (SAM) announced that Timeline Television’s newest OB truck--the first IP 4K HDR truck in Europe--will be featured on its stand (#9A01) at IBC 2017. The truck, UHD2, handles fully uncompressed 4K/UHD video over IP, with HDR support.
 
A state-of-the-art, triple-expanding OB truck, UHD2 is home to a range of SAM technology, including two Kahuna IP production switchers and IP multiviewers, with SAM’s IP infrastructure technology providing the backbone. SAM’s LiveTouch 4K/UHD replay and highlights system will also be in the truck at IBC for demonstrations.
 
Timeline’s UHD2 is designed to support 32 Sony 4K cameras. Its two Kahunas enable SDR and HDR to be run simultaneously, along with down-converted HD outputs. The setup allows production teams to work in VSF TR-03 (the SMPTE ST 2110 draft)--the first time this has been done in an OB truck--enabling Timeline to work with video and audio as separate essence flows within an IP workflow.
 
Daniel McDonnell, managing director at Timeline Television, said, “We worked closely with SAM to design a workflow based on the latest IP infrastructure and HDR technology available, providing customers with a highly scalable solution that can meet complex production requirements without the need to add additional OB support. Given the increased number of 4K cameras and replay positions that we wanted to support, IP made perfect sense and SAM’s technology even more so as it afforded us the maximum flexibility and scalability.”
 
Robert Szabó-Rowe, EVP and general manager of live production and infrastructure at SAM, commented, “We’re really excited to have Timeline’s award-winning UHD2 truck on our stand at IBC as it’s a tremendous showcase for our technology and testament to our close partnership with Timeline in delivering true market innovation. The truck offers a great opportunity for visitors to IBC to experience how IP is being used today in a real-life scenario.”
 
Timeline Television’s McDonnell will be presenting a detailed case study on UHD2 within the IBC IP Showcase theatre (E106/107).

  • Tuesday, Aug. 15, 2017
Lineup of events, program details unveiled for SMPTE 2017 Annual Technical Conference & Exhibition
SMPTE Education VP Richard Welsh (l) and Pat Griffis, SMPTE EVP, attend the 2016 Annual SMPTE Awards.
WHITE PLAINS, NY -- 

Program details for the SMPTE 2017 Annual Technical Conference & Exhibition (SMPTE 2017), Oct. 24-26 in Hollywood, Calif., have been announced. SMPTE 2017 will fill two exhibit halls and multiple session rooms at the Hollywood & Highland Center, and the event will also feature an Oktoberfest reception, Broadcast Beat’s SMPTE 2017 Live! Studio, and special events culminating with the SMPTE Annual Awards Gala at the Loews Hollywood Hotel’s Hollywood Ballroom on Thursday, Oct. 26.

“We’ve got an incredible lineup of technical sessions scheduled for this year, and we’re rounding out the conference and exhibition with some popular events that were added last year,” said SMPTE Education VP Richard Welsh, CEO of Sundog Media Toolkit. “The timely topics and technologies discussed at SMPTE 2017 are sure to make a splash as the Society dives into its next century of standards development and education.”

SMPTE’s Annual Technical Conference & Exhibition is the Society’s annual forum for exploring media and entertainment technology. The conference and exhibition will follow the daylong SMPTE 2017 Symposium — “Artificial Intelligence (AI) and Machine Learning in Digital Media Creation: The Promise, The Reality, and The (Scary?) Future” — on Oct. 23. The Symposium is co-chaired by SMPTE Fellow Michelle Munson and Yvonne Thomas of Arvato Systems. Further details about the Symposium will be available soon. Events on Oct. 23 also will include the annual Women in Technology Luncheon, presented by SMPTE and Hollywood Professional Association (HPA) Women in Post, and the SMPTE-HPA Student Film Festival, which will highlight the creative use of technology to support the art and craft of storytelling. Tickets for the luncheon and festival are available separately or as add-ons to a SMPTE 2017 conference registration.

The SMPTE 2017 Technical Conference program committee is co-chaired by three SMPTE Fellows: Paul Chapman, senior vice president of technology at FotoKem; Thomas Edwards, vice president of engineering and development at Fox; and SMPTE Education director Sara J. Kudrle, product marketing manager for playout at Imagine Communications. SMPTE 2017 itself will include the usual wealth of technical sessions, along with an array of special events that offer numerous opportunities for face-to-face interaction between attendees, exhibitors, and speakers.

The first day of the technical conference will feature special events, including the Fellows Luncheon, open exclusively to SMPTE Fellows and Life Fellows who have registered for the event, as well as the SMPTE Annual General Membership Meeting and Oktoberfest Reception, both open to all attendees with conference registration. On the second day, the Evening Reception will take place in the Ray Dolby Exhibit Hall. The SMPTE 2017 Annual Awards Gala on the third and final day of the conference will welcome registered guests on the red carpet and treat them to a reception and dinner honoring industry leaders. SMPTE 2017 will conclude with the Awards After-Party featuring the SMPTE Jam, which once again will feature a pickup band comprising a diverse group of SMPTE members playing popular hits — and possibly a few original pieces created for the occasion.

Technical conference sessions throughout all three days of SMPTE 2017 will delve into the industry’s most innovative, intriguing, and important technological advances. The papers presented will address topics including advances in display technologies; cinema processing and projection technology; wider color and dynamic range; compression; content management and storage, restoration, and preservation; content security; virtual, augmented, and mixed reality (VR, AR, and MR); media infrastructure (SMPTE ST 2110) and distribution; image acquisition and processing; new techniques in audio; quality assurance and monitoring; workflow systems management; cloud technologies; and encouraging diversity in technology.

The emerging SMPTE ST 2110 suite of standards for professional media over IP (internet protocol) networks will be a hot topic during SMPTE 2017, and Leigh Whitcomb of Imagine Communications will present a paper titled “Is SMPTE ST 2110 the New Standards Superpower?” as part of the Media Infrastructure session. This and other session presentations will delve into the standard, implementation of IP for media production and distribution, and techniques used to optimize performance.

Among the presentations in the Advances in Display Technology session, “Engineering a Live UHD Program from the International Space Station” will feature Rodney P. Grubbs of NASA’s Marshall Space Flight Center and Sandy George of Science Applications International Corporation (SAIC), who will describe how they overcame engineering challenges involved with broadcasting live content in UHD from the International Space Station, as well as the ways commercial technologies are leveraged for in-orbit use.

Callum Hughes of Amazon Studios will present during the Content Security session, describing an approach to security within a digital asset management (DAM) system. During the Stream Privacy session, Raj Nair of Ericsson will discuss mechanisms for guaranteeing stream privacy for both OTT and live/linear adaptive-bit-rate (ABR) workflows.

The Advances in Immersive Storytelling session will feature “360-Degree Video Streaming and Its Subjective Quality,” a paper presentation by Igor Curcio and Henri Toukomaa of Nokia, and a case study by Éric Minoli and Kuban Altan, respectively from Canadian companies Groupe Média TFO and Zero Density, about bridging the gaming and broadcast industries for high-productivity production. The session focusing on new technologies and techniques will include “How Artificial Intelligence and Machine Learning Will Change Content Creation Methodologies,” by Tom Ohanian of TAO Associates.

Sessions on workflow systems will include “IMF End-to-End Workflows in Media Asset Management Systems,” presented by Julian Fernandez of Tedial, as well as “Applying an Agile Approach to Next-Generation Media Management,” presented by Arvato’s Ben Davenport and Christian Siegert. Moving into cloud-oriented workflows, Avid’s Shailendra Mathur will present “Media Cloud Migration Patterns: Connecting Services Between Bare Metal, Virtual Machines, and Containers.” Richard Cartwright of Streampunk Media will present his paper on “An Internet of Things Architecture for Cloud-Fit Professional Media Workflow.”

Speaking within the session on compression, RealNetworks’ Reza Rassool will present a paper titled “VMAF Reproducibility: Validating a Perceptual Practical Quality Metric for 4K Video.” Subhabrata Bhattacharya and Adithya Prakash of Netflix will look at quality from another perspective, presenting “Towards Scalable Automated Analysis of Digital Video Assets for Content Quality Control Applications” within the Quality and Monitoring of Images and Sound session.

The SMPTE 2017 session on UHD acquisition and processing will feature a presentation by YunHyoung Kim of the Korean Broadcasting System (KBS), whose paper describes the world’s first implementation of the Internet Media Subtitles and Captions 1.0 (IMSC1) closed-captioning system — on which ATSC 3.0 captioning is based — on terrestrial UHD TV. The BBC’s Simon Thompson will present “Access Services for UHDTV: An Initial Investigation of W3C TTML2 Subtitles (Closed Captions).” Also in the UHD session, Pierre Hugues Routhier of Canada’s Creat3 Inc. will present “Beyond 4K: Can We Actually Tell Stories in Motion Pictures and TV in 8K? A Cinematography Perspective.”

The Cinema Processing and Projection Technology session will include a presentation by Tim Ryan of Texas Instruments, who will explore techniques for using and optimizing variable-frame-rate display for cinematic presentations. A presentation by Kyunghan Lee of KAI Inc. will describe a new VR-based multiscreen movie theater simulator that enables researchers and multiscreen producers to provide a testing platform for multiscreen content and the viewing environment.

The Emerging Research in Visual Perception session will feature Elizabeth Pieri and Jaclyn Pytlarz of Dolby Laboratories, presenting “Hitting the Mark — A New Color Difference Metric for HDR and WCG Imagery,” and Elizabeth DoVale, also of Dolby Laboratories and a recipient of the 2016 SMPTE Louis F. Wolf Jr. Memorial Scholarship, presenting “Assessing Psychophysics Functions for Framerate Perception.” Martyn Gates of Ravensbourne and Pure & Applied Image Recognition Limited will present “Is Seeing Still Believing: A Critical Review of the Factors That Allow Humans and Machines to Discriminate Between Real and Generated Images,” a paper exploring the implications as photo-realistic CGI becomes increasingly indistinguishable from actual pictures.

During the Content Management, Value Proposition, and Archiving session, Oracle Digital Media Solutions’ Brian Campanotti will present “SMPTE and ISO: Standards to Protect the World’s Most Valuable Assets,” a paper that delves into the inception, development, advancement, and deployment of the Archive eXchange Format (AXF). In the Next Generation TV session, a paper presentation by Alex Giladi of Comcast will discuss adaptive streaming of content that is produced using capped variable-bit-rate encoding.

The session titled “Innovating People: Managing, Mentoring, and Change” will be chaired by Loren Nielsen of Entertainment Technology Consultants and Kari Grubin of Walt Disney Studios, and will feature a discussion of mentoring and reverse-mentoring between baby boomer and millennial tech professionals. Kylee Peña of Bling Digital and Blue Collar Post Collective and Meaghan Wilbur of IncitefulMedia will discuss why diversity programs fail and how to fix them. John McCoskey of Eagle Hill Consulting — and former EVP and CTO at the Motion Picture Association of America (MPAA) — will present his paper, “A Formal Approach to Change Management for Dynamic Technology-Driven Media Organizations.”

  • Thursday, Aug. 10, 2017
Facebook envisions Watch feature as TV for social media
This image provided by Facebook shows a screenshot demonstrating Facebook's new Watch feature, which is dedicated to live and recorded video. The idea is to have fans commenting and interacting with the videos. The new Watch section is a potential threat to Twitter, YouTube, Netflix and other services for watching video. (Courtesy of Facebook via AP)
NEW YORK (AP) -- 

Facebook envisions its new Watch feature as TV designed for social media, a place where users comment, like and interact with show creators, stars and each other — and never leave.

It's a potential threat to Twitter, YouTube, Netflix and other services for watching video, including old-fashioned TV. Yet its success is far from guaranteed.

While people watch a lot of videos on Facebook, these are mostly shared by their friends, seen as users scroll down their main news feed.

Getting people to see Facebook as a video service is like Walmart trying to sell high fashion, or McDonald's peddling high-end food, said Joel Espelien, senior analyst with The Diffusion Group, a video research firm.

Sure, it's possible, but something is off.

"It's very difficult to change people's core perception of what your brand is," he said.

Facebook already has a special video section, but it mainly shows a random concoction of "suggested" videos. The new Watch section replaces it. Some U.S. users got Watch on Thursday; others will get it over time.

The idea behind Watch is to let people find videos and series they like, keep up with them as new episodes appear, and interact with the show's stars, creators and other fans. People's own tastes, as well as those of their friends, will be used to recommend videos.

Daniel Danker, a product director for video at Facebook, said the most successful shows will be the ones that get people interacting with each other. "Live does that better than almost anything," he said.

Facebook wants to feature a broad range of shows on Watch, including some exclusive to Facebook. Users who already follow certain outlets, say, BuzzFeed, will get recommended shows from those pages.

But Espelien wonders whether Facebook users will tap (or click) the Watch tab when with another tap of the finger they can "click over to Hulu or Netflix or whatever."

Though Facebook might want you to think otherwise, Espelien said there's no boundary keeping you from straying.

Advertising details are still being hashed out, but typically the shows will have five- to 15-second ad breaks. Facebook said show creators will decide where the ads go, so they can be inserted during natural breaks.

But it might be a tough sell for advertisers used to the predictable, reliable audience that television has delivered, Forrester Research analyst Jim Nail said in an email. Facebook's big challenge, he said, will be to train users "to establish a Watch habit."

  • Tuesday, Aug. 8, 2017
Meredith Corp. to standardize stations on Avid’s MediaCentral Platform
BURLINGTON, Mass. -- 

U.S. media group Meredith Corporation has chosen to standardize its workflow on Avid’s MediaCentral® Platform. Over a six-year period, Avid will upgrade 10 stations, install new Avid workflows at two additional stations, and enable Meredith to migrate to a virtualized environment, reducing costs and boosting efficiency while also benefiting from the advantage of adopting a common platform across the enterprise.
 
Meredith’s Local Media Group includes 17 owned or operated television stations reaching 11 percent of U.S. households. Meredith’s portfolio is concentrated in large, fast-growing markets, with seven stations in the nation’s Top 25--including Atlanta, Phoenix, St. Louis and Portland--and 13 in Top 50 markets. Its stations produce 700 hours of local news and entertainment every week, delivering 24/7 news coverage on digital, mobile and broadcast platforms. Faced with the pressures of operating in a digital environment, Meredith needed to upgrade its aging infrastructure and reduce expenditures. A mix of disparate news production equipment at different stations made technology upgrades, support, training and planning complicated and expensive.
 
Meredith’s enterprise-wide adoption of Avid’s MediaCentral Platform will help the media company overcome these challenges. With a single platform across the enterprise and planned upgrades every two years, Meredith’s stations will benefit from advanced tools and workflows for enterprise-wide search and content sharing, and for embracing social media. 
 
“Avid is a leader in the broadcast news industry and has been a trusted partner for many years,” said Larry Oaks, VP of Technology at Meredith. “By standardizing on Avid’s platform, we have a one-stop shop for all our technology, support and training needs across our newsrooms, which will enable us to reduce costs, save a great deal of time and effort, and give us the tools we need to succeed in today’s digital environment.”
 
Meredith’s new workflow comprises Avid’s comprehensive tools and workflow solutions to create, deliver and optimize media: Avid NEXIS®, the media industry’s first and only software-defined storage platform; MediaCentral | UX, the cloud-based web front end for the MediaCentral Platform; Avid Interplay® | Production for asset management; and Avid iNEWS® and iNEWS | Command for newsroom management. Meredith will use Media | Distribute to deliver content to social media channels, as well as Media Composer® | Cloud Remote and the Media Composer | NewsCutter® Option for nonlinear editing, and Avid AirSpeed® video servers. Avid Professional Services will provide installation, support and customized enterprise-wide training.
 
“Meredith is the latest member of Avid’s growing community of preeminent customers to adopt an enterprise-wide single platform approach,” said Jeff Rosica, president at Avid. “With Avid’s flexible commercial options and deployment models, Meredith can keep its stations and staff at the forefront of technology, virtualize its infrastructure, and respond quickly to new challenges and opportunities--all while reducing costs.” 

  • Saturday, Aug. 5, 2017
Academy investigates 11 scientific and technical areas for 2017 Oscars
LOS ANGELES -- 

The Academy of Motion Picture Arts and Sciences has announced that 11 distinct scientific and technical investigations have been launched for the 2017 Oscars®.

These investigations are made public so individuals and companies with devices or claims of innovation within these areas will have the opportunity to submit achievements for review.

The deadline to submit additional entries is Tuesday, August 15, at 5 p.m. PT.  The Academy’s Scientific and Technical Awards Committee has started investigations into the following areas:

  • Systems using multiple, stabilized, synced cameras to capture background footage, with integrated playback for simulating movement in static vehicles
  • Submersible, telescoping camera cranes
  • Automated systems for cinema auditorium quality control
  • Systems for on-set digital dailies with color-managed workflows
  • Systems for onboard RAW recording for digital cinema cameras
  • Gyroscopically stabilized camera platforms for aerial cinematography
  • Systems for modular character rigging enabling large-scale, complex, high-quality 3D digital character animation
  • Systems for digital storyboarding and story reel development
  • Efficient systems for interactive animation of large numbers of high-resolution 3D characters with full surface detail
  • Single-surface audio platforms for automated dialogue replacement (ADR)
  • Software applications to synthesize complex sound scenes from a limited set of source elements

Claims of prior art or similar technology must be submitted online.

After thorough investigations are conducted in each of the technology categories, the committee will meet in November to vote on recommendations to the Academy’s Board of Governors, which will make the final awards decisions.

The 2017 Scientific and Technical Awards Presentation will be held on Saturday, February 10, 2018.

The 90th Oscars will be held on Sunday, March 4, 2018, at the Dolby Theatre® at Hollywood & Highland Center® in Hollywood, and will be televised live on the ABC Television Network at 7 p.m. ET/4 p.m. PT.  The Oscars also will be televised live in more than 225 countries and territories worldwide.


  • Wednesday, Aug. 2, 2017
RED RAVEN Camera Kit available via Apple.com
The RED RAVEN Camera Kit
IRVINE, Calif. -- 

RED Digital Cinema has announced that its RED RAVEN Camera Kit is now available exclusively through Apple.com and available to demo at select Apple Retail Stores. This complete handheld camera package features a diverse assortment of components from some of the industry’s top brands, including:

  • RED RAVEN 4.5K camera BRAIN
  • RED DSMC2 Touch LCD 4.7” Monitor
  • RED DSMC2 Outrigger Handle
  • RED V-Lock I/O Expander
  • RED MINI-MAG (120 GB)
  • Two IDX DUO-C98 batteries with VL-2X charger
  • G-Technology ev Series RED MINI-MAG Reader
  • Sigma 18-35mm F1.8 DC HSM | Art
  • Nanuk heavy-duty camera case
  • Final Cut Pro X
  • foolcontrol iOS app for RAVEN Camera Kit

The RED RAVEN Camera Kit is available for $14,999.95. Customers can buy this package or learn more at Apple.com and select Apple Retail Stores.

“We are very excited to work with Apple on the launch of the RED RAVEN Camera Kit, available exclusively through Apple.com,” said Jarred Land, president of RED Digital Cinema. “The RED RAVEN Camera Kit is a ready-to-shoot professional package that gives content creators everything they need to capture their vision with RED’s superior image capture technology.”

The RAVEN 4.5K is RED’s most compact camera BRAIN, weighing in at just 3.5 lbs. This makes it a great choice for a range of applications including documentaries, online content creation, indie filmmaking, and use with drones or gimbals. The RAVEN is equipped with a 4.5K RED DRAGON sensor, and is capable of recording REDCODE RAW (R3D) in 4.5K at up to 120 fps and in 2K at up to 240 fps. RED RAVEN additionally offers incredible dynamic range, RED’s renowned color science, and is capable of recording REDCODE RAW and Apple ProRes simultaneously—ensuring shooters get the best image quality possible in any format.

The RED RAVEN Camera Kit also includes Final Cut Pro X, which features native support for REDCODE RAW video, built-in REDCODE RAW image controls, and the most complete ProRes support of any video editing software. Together with the free RED Apple Workflow software, Final Cut Pro X allows professional video editors to work quickly and easily with RED RAVEN footage on MacBook Pro, iMac, and Mac Pro systems.

  • Friday, Jul. 28, 2017
Faceware Technologies announces Faceware LiveSDK
LOS ANGELES -- 

Faceware Technologies, provider of markerless 3D facial motion capture solutions, has announced an SDK for its real-time facial mocap and animation technology, Faceware Live. The Windows-native C++ SDK will enable developers and creatives to build their own real-time, interactive applications: live player-to-player chat in games, live interactive displays and activations, and even integration of the SDK into their own production tools and processes. Faceware will be speaking about the capabilities of the SDK at SIGGRAPH 2017 (Booth 741), Aug. 1-3.

“With the rise in VR/AR/MR, interactive marketing, and the use of CG, we’re seeing a growing number of inquiries from many different markets,” said Peter Busch, vice president of business development at Faceware Technologies. “Rather than addressing each and every request, we’ve created an SDK that lets developers build the tools to meet their own needs. We’ve got some amazing use cases I can’t wait to talk about.”

Features of the new SDK include the following (a conceptual usage sketch follows the list):

  • Windows Native C++
  • High-frame-rate tracking, with no visible latency
  • Over 100 APIs developers can use to track and animate faces in real time
  • Creates facial animation in real time from a person’s face on video
  • Tracks 82 landmarks on the face and streams over 40 animation controls
  • One-second camera-to-face calibration
  • Tracks facial movement from a live camera feed, a video file (e.g., .mov) or an image sequence (e.g., .jpg)
  • Works with almost any camera or webcam, including head-mounted cameras
  • Easy adjustment of camera settings for optimizing the user experience
  • Tools to multiply and adjust animation output values to match your characters
  • Simulated animation output for easy debugging and testing of character animation before use
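
The real LiveSDK is a Windows-native C++ library, and its actual API is not reproduced here. The sketch below uses invented Python stand-ins purely to make the flow the feature list describes concrete: calibrate once, then track landmarks per frame and solve them into animation control values.

```python
# Hypothetical illustration only -- every class and function here is an invented
# stand-in; the real Faceware LiveSDK is Windows Native C++ with its own API.

class StubTracker:
    """Mimics the calibrate/track/solve flow described in the feature list."""

    def calibrate(self, frame):
        # Real SDK: roughly one-second camera-to-face calibration.
        return True

    def track(self, frame):
        # Real SDK: returns 82 facial landmarks per frame.
        return [(0.0, 0.0)] * 82

    def solve(self, landmarks):
        # Real SDK: streams over 40 animation control values.
        return {"jaw_open": 0.25, "brow_raise_left": 0.7}

def frames():
    # Stand-in for a live camera feed, a video file (.mov) or a .jpg sequence.
    for _ in range(3):
        yield object()

tracker = StubTracker()
calibrated = False
for frame in frames():
    if not calibrated:
        calibrated = tracker.calibrate(frame)
        continue
    controls = tracker.solve(tracker.track(frame))
    print(controls)  # a real application would stream these to a character rig
```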

“We’re really excited to put our real-time facial tracking technology directly into the hands of developers,” said Jay Grenier, director of software and technology at Faceware. “Faceware Live has been used or is being used for a number of real-time applications, such as Hasbro’s live-streamed social media announcement for Monopoly and the recent Macinness-Scott installation at Sotheby’s ‘Art of VR’ event in New York. And now, with Faceware LiveSDK, the community is about to get a fantastic new tool to develop their own amazing applications.”

  • Wednesday, Jul. 26, 2017
Mark Zuckerberg, Elon Musk spar over the rise of AI
This combo of file images shows Facebook CEO Mark Zuckerberg, left, and Tesla and SpaceX CEO Elon Musk. (AP Photo/Manu Fernandez, Stephan Savoia)
SAN FRANCISCO (AP) -- 

Tech titans Mark Zuckerberg and Elon Musk recently slugged it out online over the possible threat artificial intelligence might one day pose to the human race, although you could be forgiven if you don't see why this seems like a pressing question.

Thanks to AI, computers are learning to do a variety of tasks that have long eluded them — everything from driving cars to detecting cancerous skin lesions to writing news stories. But Musk, the founder of Tesla Motors and SpaceX, worries that AI systems could soon surpass humans, potentially leading to our deliberate (or inadvertent) extinction.

Two weeks ago, Musk warned U.S. governors to get educated and start considering ways to regulate AI in order to ward off the threat. "Once there is awareness, people will be extremely afraid," he said at the time.

Zuckerberg, the founder and CEO of Facebook, took exception. In a Facebook Live feed recorded Saturday in front of his barbecue smoker, Zuckerberg hit back at Musk, saying people who "drum up these doomsday scenarios" are "pretty irresponsible." On Tuesday, Musk slammed back on Twitter, writing that "I've talked to Mark about this. His understanding of the subject is limited."

Here's a look at what's behind this high-tech flare-up — and what you should and shouldn't be worried about.

WHAT IS AI, ANYWAY?
Back in 1956, scholars gathered at Dartmouth College to begin considering how to build computers that could improve themselves and take on problems that only humans could handle. That's still a workable definition of artificial intelligence.

An initial burst of enthusiasm at the time, however, devolved into an "AI winter" lasting many decades as early efforts largely failed to create machines that could think and learn — or even listen, see or speak.

That started changing five years ago. In 2012, a team led by Geoffrey Hinton at the University of Toronto proved that a system using a brain-like neural network could "learn" to recognize images. That same year, a team at Google led by Andrew Ng taught a computer system to recognize cats in YouTube videos — without ever being taught what a cat was.

Since then, computers have made enormous strides in vision, speech and complex game analysis. One AI system recently beat the world's top player of the ancient board game Go.

HERE COMES TERMINATOR'S SKYNET ... MAYBE
For a computer to become a "general purpose" AI system, it would need to do more than just one simple task like drive, pick up objects, or predict crop yields. Those are the sorts of tasks to which AI systems are largely limited today.

But they might not be hobbled for too long. According to Stuart Russell, a computer scientist at the University of California at Berkeley, AI systems may reach a turning point when they gain the ability to understand language at the level of a college student. That, he said, is "pretty likely to happen within the next decade."

While that on its own won't produce a robot overlord, it does mean that AI systems could read "everything the human race has ever written in every language," Russell said. That alone would provide them with far more knowledge than any individual human.

The question then is what happens next. One set of futurists believe that such machines could continue learning and expanding their power at an exponential rate, far outstripping humanity in short order. Some dub that potential event a "singularity," a term connoting change far beyond the ability of humans to grasp.

NEAR-TERM CONCERNS
No one knows if the singularity is simply science fiction or not. In the meantime, however, the rise of AI offers plenty of other issues to deal with.

AI-driven automation is leading to a resurgence of U.S. manufacturing — but not manufacturing jobs. Self-driving vehicles being tested now could ultimately displace many of the almost 4 million professional truck, bus and cab drivers now working in the U.S.

Human biases can also creep into AI systems. A chatbot released by Microsoft called Tay began tweeting offensive and racist remarks after online trolls baited it with what the company called "inappropriate" comments.

Harvard University professor Latanya Sweeney found that searching in Google for names associated with black people more often brought up ads suggesting a criminal arrest. Examples of image-recognition bias abound.

"AI is being created by a very elite few, and they have a particular way of thinking that's not necessarily reflective of society as a whole," says Mariya Yao, chief technology officer of AI consultancy TopBots.

MITIGATING HARM FROM AI
In his speech to the governors, Musk urged them to be proactive, rather than reactive, in regulating AI, although he didn't offer many specifics. And when a conservative Republican governor challenged him on the value of regulation, Musk retreated and said he was mostly asking for government to gain more "insight" into potential issues presented by AI.

Of course, the prosaic use of AI will almost certainly challenge existing legal norms and regulations. When a self-driving car causes a fatal accident, or an AI-driven medical system provides an incorrect medical diagnosis, society will need rules in place for determining legal responsibility and liability.

With such immediate challenges ahead, worrying about superintelligent computers "would be a tragic waste of time," said Andrew Moore, dean of the computer science school at Carnegie Mellon University.

That's because machines aren't now capable of thinking out of the box in ways they weren't programmed for, he said. "That is something which no one in the field of AI has got any idea about."

  • Tuesday, Jul. 25, 2017
Foundry launches Nuke and Hiero 11.0
Timeline Disk Cache in Nuke Studio: Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.
LONDON -- 

Creative software developer Foundry has launched Nuke and Hiero 11.0, the next major release for the Nuke family of products including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer.
 
Nuke is a leading high-end compositing tool, and the 11.0 releases align the product family with industry standards while introducing a host of features and updates that boost artist performance and increase collaboration.
 
Following a successful beta launch in April 2017, Nuke and Hiero 11.0 aim to redefine how teams collaborate, helping them get the highest-quality results, faster.
 
Key features for this release include:

  • VFX Reference Platform 2017: The Nuke family is updated to VFX Reference Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.
  • Live Groups: Introduces a new type of group node offering a powerful collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages (see the scripting sketch after this list).
  • Frame Server in Nuke and NukeX: Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resources on your local machine, enabling you to continue working while rendering happens in the background.
  • New Lens Distortion in NukeX: The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. It is now also GPU-enabled.
  • Timeline Disk Cache in Nuke Studio: Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.
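
For a sense of how a Live Group workflow might be driven from Nuke's built-in Python scripting, here is a minimal sketch. The "LiveGroup" node class and "file" knob names are assumptions inferred from the release description, and the paths are hypothetical; check the Nuke 11 documentation for the exact names.

```python
# Minimal sketch of scripting a Live Group (run inside Nuke's Python interpreter).
# "LiveGroup" and the "file" knob are assumed names; paths are hypothetical.
import nuke

# Upstream artist: publish part of the comp as a shared Live Group on disk.
live = nuke.createNode("LiveGroup")                 # assumed node class name
live["file"].setValue("/shared/shot010_keying.nk")  # assumed knob name

# Downstream artist: Live Groups refresh when a script loads, so reopening the
# master script picks up upstream changes without rendering intermediate stages.
nuke.scriptOpen("/shared/shot010_master.nk")        # hypothetical master script
nuke.execute("Write1", 1001, 1100)                  # render the updated comp
```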

Jody Madden, chief product and customer officer at Foundry, commented: “We’re delighted to announce the release of Nuke and Hiero 11.0 with new workflows for artist collaboration and a renewed focus on industry standards. Nuke, NukeX and Nuke Studio continue to be the go-to industry tools for compositing, editorial and review tasks, and we’re confident these updates will continue to provide performance improvements and further increase artist efficiency.”
 
Nuke and Hiero 11.0 are live now and available for purchase on Foundry’s website and via accredited resellers.