
Toolbox

  • Monday, Aug. 21, 2017
Sony FS5 deployed for an inspiring peak performance in "True North"
"True North"

The Workshop’s True North is an inspiring documentary about Sean Swarner, who in 2002 became the first cancer survivor to reach the top of Mount Everest. He has kept climbing ever since, spreading his encouraging message of hope. In a span of five years, the two-time cancer survivor, with only one functioning lung, scaled the Seven Summits, the highest mountain on each of the seven continents.
 
Swarner’s journey started at the age of 13, when he was diagnosed with Stage 4 Hodgkin’s lymphoma and given two months to live. He beat the odds and went into remission 10 months later, only to be diagnosed with Askin’s sarcoma two years after that and told he had two weeks to live. Once again, Swarner overcame adversity and defied expectations, continuing to share his infectious spirit.
 
But defying the odds and topping the Seven Summits wasn’t enough to satisfy Swarner. Two years ago, he set out to conquer the North and South Poles and complete the Explorer’s Grand Slam. In 2015, Swarner checked the South Pole off his list, and in April 2017, after reaching the North Pole bearing a flag with the names of more than 2,000 people affected by cancer, he became an official member of this exclusive club.
 
True North director/producer Tom Caamano and DP Igor Kropotov chose Sony’s FS5 as their primary camera to document this remarkable achievement. During the nine-day journey, which saw temperatures of -40 degrees Celsius, the two asserted that “the Sony FS5 was definitely the right camera for this shoot.” In addition, the team that trekked to the North Pole, which included Kropotov and second camera operator Corbin Johnson, used two Sony α7S IIs for stills and support. They paired all the cameras with Sony’s complementary 18-105mm zoom lens for longer telephoto shots and brought an 11-18mm lens for extremely wide angles.
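
As a rough guide to what those two zooms cover, the angle of view follows directly from focal length and sensor width. The sketch below, in Python, assumes a Super 35-sized active area roughly 24.6mm wide; that figure is an approximation, since the FS5’s exact active width varies by recording mode.

    import math

    def horizontal_fov(focal_mm, sensor_width_mm=24.6):
        # Horizontal angle of view in degrees; 24.6mm is an assumed
        # Super 35-ish active width, not an official FS5 specification.
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

    for f in (11, 18, 105):
        print(f"{f:>3} mm = {horizontal_fov(f):5.1f} degrees horizontal")
    # 11 mm = 96.4, 18 mm = 68.7, 105 mm = 13.4 (approximate)

By that estimate, the 11mm end is genuinely panoramic, while 105mm provides the tighter telephoto framing described above.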
 
Kropotov, who had previously used Sony cameras, including the FS5, is no stranger to shooting in remote, cold climates, including Nepal and Siberia. It is no wonder he “jumped at the once-in-a-lifetime opportunity to go to the North Pole and be a part of an amazing documentary centered on Sean, an inspirational and strong-willed person.”
 
The crew kept their equipment list limited, bringing only Sony’s FS5, two α7S IIs, 10 additional batteries, two lenses and more than a dozen 128GB Sony SD cards for this journey. “We had to be very conscious of our camera’s size and weight, since it had to easily fit in our sled and we had weight restrictions when flying in the helicopter,” Kropotov said. “Alternately, I’d strap it around my neck and it wouldn’t weigh me down. We appreciated that the camera doesn’t require a lot of accessories to make it operational. The FS5’s small and compact size, coupled with the quality of the image, made it the best tool for the job.”
 
The team opted to shoot HD (1920x1080) to ensure they had enough media and batteries to stay up and running for nine days, and to stay “self-sufficient without the need to carry hard drives or laptops, which allowed us to rely solely on SD cards.”
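
That media budget is easy to sanity-check. Here is a back-of-the-envelope sketch assuming the FS5’s XAVC-L HD codec at roughly 50 Mb/s; the bitrate is an assumption, since the actual figure depends on the recording mode chosen.

    CARDS = 12            # "more than a dozen" 128GB SD cards
    CARD_GB = 128
    BITRATE_MBPS = 50     # assumed average HD recording bitrate

    seconds_per_card = CARD_GB * 8_000 / BITRATE_MBPS   # GB -> megabits
    total_hours = CARDS * seconds_per_card / 3600
    print(f"{seconds_per_card / 3600:.1f} h per card, "
          f"{total_hours:.0f} h across {CARDS} cards")
    # -> 5.7 h per card, 68 h across 12 cards

At rates like these, a dozen cards comfortably cover nine days of selective shooting with no laptops or hard drives in the sled.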
 
Kropotov praised the FS5’s slow-motion capabilities, saying they added a “meditative element” to the documentary. He described using the camera to capture a classic shot of a mug of hot water tossed into the air and instantly turning into ice droplets. He felt slow motion helped illustrate the “emotional weight of traveling to the North Pole.”
 
Kropotov also appreciated the camera’s built-in neutral density filters. “Because there’s 24-hour sunlight at the North Pole and the reflective white snow is so bright,” he said, “the ISO was generally pretty low, and we used the ND filters to give us that depth of field, and it worked extremely well.”
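
The arithmetic behind that approach is simple: ND strength is measured in stops, and each stop halves the light reaching the sensor. A minimal sketch, assuming (hypothetically) that bright snow meters around f/16 at base ISO while the cinematographer wants f/4 for shallower depth of field:

    import math

    def nd_stops_needed(metered_f, target_f):
        # Stops of ND required to open up from the metered aperture to
        # the target aperture at the same shutter speed and ISO.
        return math.log2(metered_f ** 2 / target_f ** 2)

    print(nd_stops_needed(16, 4))   # -> 4.0 stops, i.e. an ND 1/16 filter

Four stops of attenuation is squarely the sort of job in-camera ND filters exist to do, which is why the crew could keep the ISO low and still control depth of field.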
 
As with any shoot in a punishing environment, the crew was up against a lot of unknowns. One challenge the FS5 rose to was producing a natural, realistic image despite the flat terrain, the persistent landscape of blinding white snow and the 24-hour sunlight.
 
Kropotov did experience some issues in the intense temperatures, likening carrying a camera around in the extreme cold to storing it in a freezer and shooting with it immediately upon removal – only far colder. Other concerns included lag in the image and the risk that the LCD monitor’s liquid crystals would freeze in the bitter cold.
 
He also quickly learned to keep the extra batteries warm and fully functional by storing them in his sleeping bag overnight and in a chest pocket during the day, using his body heat to keep them operational; “otherwise they’d wind up being ice cubes by the end of the day,” he said. The crew also had to contend with condensation, which froze instantly to their faces, eyelashes and even the camera, causing the buttons and lenses to fog if not handled properly.
 
Kropotov also learned the importance of preparing the camera to his desired settings in advance of leaving the tent, since “there’s no tinkering or changing settings possible, and if something falls off the camera, you have to wait until the end of the day when you’re inside the frigid tent to fix it.  Everything has to be on point in terms of where it’s located and stationed and how the settings and switches work with ISO and light balance.  Because of the constant sunlight and temperature, we didn’t want or need to significantly manipulate these settings over the course of the day.”
 
Despite some minor cracking and freezing, Kropotov asserted, “The FS5 held up really well and if I were to do it all over again, I would most certainly take the FS5 with me again – it was great.”
 
With Sony’s 18-105mm lens and the 11-18mm lens, the team was able to capture a variety of scenarios and vantage points using just two lenses. Kropotov explained, “Sometimes it’s not ideal to switch lenses in the field, so our choices had to be varied and flexible. There’s a lot of talk about the condensation settling on the glass and the sensor, but that never seemed to be an issue, which contributed to our decision not to switch out lenses very frequently in the field.”
 
He continued, “The idea was to use the 11-18 inside the tent to get wider shots and also to fully capture the landscape without veering too far off our path.  We were also able to get wide shots, to show a sense of scale of the Arctic Ocean, and shooting in the helicopter gave us a wide field of view.  One of the film’s concepts is to bring these survivors and people affected by cancer who are named on the flag on this journey with us.  The wide angle shots conceptually capture the constant feeling of movement and allow viewers to follow Sean and have a visual sense of what carries him through this journey.” 
 
Caamano described the team’s intentional form of storytelling. “The distance between Sean and the sled he pulls is about four to six feet,” he said. “We wanted the viewers to experience that distance throughout the entire documentary and feel like they’re in the sled with the team’s supplies and Sean’s symbolic flag. When Sean’s going to get his check-up or visiting people at hospitals before his trip, we opted to have the camera the same distance behind him and use those same prime lenses to keep the vantage point consistent. Then, when we made the transition to the North Pole, it was a seamless extension of his day-to-day journey. We like to say that six months beforehand, when Sean got the first signature on his flag, that’s when the trip to the North Pole really began, so we wanted to keep that uniformity in our point of view.”
 
Kropotov spoke about the benefits of using all Sony cameras, saying, “We wanted to keep everything in the Sony family because it allowed us to use the same system. Lenses and other accessories were interchangeable, and our videos and stills looked great. If I was shooting with the FS5, I would have an α7S II underneath my layers, ready to swiftly capture a lead. The Alpha cameras were a key element of our shoot, and the ability to quickly draw one out and produce a quality image was great. Another reason we chose it as our secondary camera is that it is an integrated system with no cables and fewer buttons, which means less opportunity for failure.”
 
Caamano described other elements of the documentary, many of which were shot using Sony’s F5 camera and matched perfectly with the FS5 footage.  “Prior to his excursions, Sean met with cancer patients and people who are at the darkest points in their lives,” he said.  “When he meets with them he tells his story of overcoming hardship, and when he leaves, these people are at a different point.  They’re feeling more positive.  They’re feeling more inspired.  And we’re sitting back as a fly on the wall and documenting this amazing story to encourage and motivate others. It’s a privilege for us as filmmakers to witness something so genuine and inspiring and see how many people Sean is affecting.  And we get to put faces to those names on the flag and follow up and see how they’re doing.”
 
In the end, the journey was an overwhelming success, and Swarner now belongs to the elite list of the few people who have accomplished the Explorer’s Grand Slam; he has the video diaries to prove it. Caamano joked, “I think Sean has run out of places to go, so the next journey may have to be into space!”
 
True North is now in postproduction and will air nationally on American Public Television in the fall. 

  • Monday, Aug. 21, 2017
ASC Technology Committee upgraded to Motion Imaging Technology Council
Curtis Clark, ASC accepts the Sci-Tech Award for the ASC CDL in February 2013, along with (l-r) Joshua Pines, David Reisner, David Register, and Lou Levinson (not pictured). (Credit: Aaron Poole/©A.M.P.A.S.)
LOS ANGELES -- 

The ASC Technology Committee has been renamed the ASC Motion Imaging Technology Council. Established in 2003, the committee has helped organize efforts to study and assess subjects ranging from digital cameras and lens optics to motion imaging workflows, advanced color management, virtual production techniques, digital archiving and, more recently, virtual reality.

“During our past 14 years of proactive motion picture and TV industry engagement, the ASC Technology Committee has played a significant leadership role in guiding the evolution and development of key motion imaging technologies to better support our filmmaking art form,” noted Chairman Curtis Clark, ASC.

“Many of our industry partners and supporters, along with users of our technologies, have suggested that the Committee’s name does not sufficiently convey the scope and influence that our activities have had on important motion imaging technology developments,” he continued. “In response to that input and after careful consideration, we have decided to change the Committee’s name to the ASC Motion Imaging Technology Council (MITC) — or ‘My Tech.’ We believe this better represents the expanded scope of the work we are doing and our widely recognized role as industry leaders — influencing the advancement of motion imaging technologies in ways that best serve the creative interests of filmmakers while emphasizing the cinematographer’s contribution to the art form.”

Clark added, “Our Subcommittees will now be designated Committees of the ASC Motion Imaging Technology Council. We will continue to encourage our Committees to work in a coordinated manner, combining their expertise on topics of wide interest and concern, including ACES, HDR, digital motion picture camera developments, look management, virtual production techniques, lens developments, DI, motion imaging workflows, projection and display technologies, archiving, as well as advanced imaging.”

MITC’s latest reports on a variety of technological issues will be published in the September issue of the SMPTE Motion Imaging Journal as part of the 2017 SMPTE Progress Report.

  • Thursday, Aug. 17, 2017
PlayBox Technology demos at IBC to feature its CloudAir and Neo platforms
PlayBox Technology's CloudAir platform
LONDON -- 

PlayBox Technology will demonstrate complete broadcast playout solutions leveraging its cloud-based CloudAir and server-based Neo platforms at the upcoming IBC2017 exhibition in Amsterdam from Sept. 15-19. Hybrid configurations combining the strengths of both platforms will also be shown.

“Broadcasters today are demanding speed and flexibility in the way they set up and manage their services,” said Don Ash, president of PlayBox Technology. “Partnership agreements between PlayBox Technology and an increasing number of communication service providers have made CloudAir more accessible than ever to existing and would-be broadcasters throughout the world. CloudAir eliminates the need for channel managers to wait for new technical hardware to be delivered, installed and commissioned. Available on a fast-startup software-as-a-service basis, CloudAir forms the basis for highly efficient broadcasting via terrestrial, satellite and dedicated cable wherever and whenever these are the channel management’s preferred delivery media. It makes the process of starting a new channel as simple as making a phone call, either direct to their preferred service provider or via the global network of PlayBox Technology support offices.”

“CloudAir also gives content owners the ability to start purely IPTV-based channels at very short notice, accessible to online viewers in any country. IPTV channels can be operated to a published schedule or as viewer-specific time-buffered video-on-demand,” added CEO Pavlin Rahnev. “Channel managers can control the whole process of branding and playout via a secure link from a desktop or even a laptop computer. They can upload content via the same link ahead of transmission while retaining the freedom all broadcasters appreciate to add late-breaking items such as news stories to the playout schedule. Entire channels can be operated this way without managers needing to own, accommodate and maintain dedicated hardware. We will also be demonstrating the ease with which CloudAir can be integrated with our established Neo server-based product series to form a hybrid of onsite and offsite channel management and playout resources. An increasing number of Neo customers are already seeing the advantages CloudAir offers as a remote disaster-recovery solution and as a medium for single-event OTT or full 24/7 fast-startup television channels.”

Among new CloudAir features making their IBC debut will be a transcoder capable of handling multiple file wrappers and formats including MPEG PS/TS, MXF, QT, AVI, MP4, GXF, MPG2, H.264, ProRes, DNxHD and MJPEG. Also being introduced to European broadcasters are an enhanced graphics editor template preparation interface, improved playlist editing, advanced playlist export to EPGs and automated linking of stored assets.

A new addition to the Neo platform, Neo TS IP Stream Delay, will make its maiden exhibition appearance. Occupying a standalone 1U chassis, Neo TS IP Stream Delay provides fully transparent delay of IP transport streams such as DVB/ATSC MPEG broadcast-quality compressed video and audio for single or multichannel time zone shift and disaster-recovery applications. Designed for fully automated operation, it can be configured with multiple input channels and multiple delayed outputs. Each input also has one zero-delay output. All operating parameters are easily adjusted via an integral web-based user interface, including channel-specific time delay in 15-second increments. Maximum delay duration depends on input bit rate and storage capacity. Additional features include programme information display of MPEG-compliant transport streams plus automatic error logging.
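
The unit’s delay ceiling follows from simple arithmetic: stored bits divided by the stream’s bitrate. A sketch with illustrative numbers; the 2TB store and 20 Mb/s transport stream below are assumptions, not published specifications.

    def max_delay_hours(storage_gb, stream_mbps):
        # Upper bound on delay: how long the store can buffer the stream.
        return storage_gb * 8_000 / stream_mbps / 3600

    def quantize_delay(seconds, step=15):
        # Snap a requested delay to the unit's 15-second increments.
        return int(round(seconds / step)) * step

    print(f"{max_delay_hours(2000, 20):.0f} h maximum")   # -> 222 h
    print(quantize_delay(5 * 3600 + 7))                   # -> 18000 seconds

Even a modest store therefore gives days of headroom for time-zone shifting a typical compressed stream.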

Over 40 new features for other modules in the Neo series will be introduced at IBC2017. These include the ability to integrate ProductionAirBox Neo closely with the Associated Press ENPS news production system via MOS gateway. Among other additions to the capabilities of PlayBox Neo are extended control features, expanded file handling capabilities, greater input and output connectivity and Microsoft Windows 10 compatibility. These have all been implemented within an informative and intuitive graphic interface which is familiar to operators around the world.

PlayBox Technology Limited is an international communications and information-technology company serving the broadcast and corporate sectors in more than 120 countries. Over 17,000 TV and branding channels are powered by PlayBox Technology Limited broadcast solutions. Users include national and international broadcasters, start-up TV channels, webcasters, DVB (IP/ASI) TV channels, interactive TV and music channels, film channels, remote TV channels and disaster recovery channels.

  • Wednesday, Aug. 16, 2017
Timeline Television to showcase its IP 4K HDR OB truck at IBC
Timeline Television's newest OB truck
NEWBURY, UK -- 

Snell Advanced Media (SAM) announced that Timeline Television’s newest OB truck--the first IP 4K HDR truck in Europe--will be featured on its stand (#9A01) at IBC 2017. The truck, UHD2, seamlessly handles fully uncompressed 4K/UHD, IP and HDR.
 
A state-of-the-art, triple-expanding OB truck, UHD2 is home to a range of SAM technology, including two Kahuna IP production switchers and IP multiviewers, with SAM’s IP infrastructure technology providing the backbone. Also in the truck for IBC, SAM’s LiveTouch 4K/UHD replay and highlights system will be used for demonstrations.
 
Timeline’s UHD2 is designed to support 32 Sony 4K cameras. Its two Kahunas enable SDR and HDR to be run simultaneously, along with down-converted HD outputs. The setup allows production teams to work in VSF TR03 (the SMPTE ST 2110 draft)--the first time this has been done in an OB truck--enabling Timeline to work with video and audio as separate essence flows within an IP workflow.
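
Handling essences separately over IP matters partly because of the sheer bandwidth involved. A rough sketch of the video flows alone, assuming 2160p50 at 10-bit 4:2:2 (20 bits per pixel, active picture only; real ST 2110-20 flows add packetization overhead):

    W, H, FPS, BPP = 3840, 2160, 50, 20   # assumed 2160p50, 10-bit 4:2:2
    per_camera_gbps = W * H * FPS * BPP / 1e9
    print(f"{per_camera_gbps:.1f} Gb/s per camera, "
          f"{32 * per_camera_gbps:.0f} Gb/s for 32 cameras")
    # -> 8.3 Gb/s each, 265 Gb/s aggregate

Aggregates in the hundreds of gigabits per second are the territory where an IP fabric scales more gracefully than a conventional SDI router.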
 
Daniel McDonnell, managing director at Timeline Television, said, “We worked closely with SAM to design a workflow based on the latest IP infrastructure and HDR technology available, providing customers with a highly scalable solution that can meet complex production requirements without the need to add additional OB support. Given the increased number of 4K cameras and replay positions that we wanted to support, IP made perfect sense and SAM’s technology even more so as it afforded us the maximum flexibility and scalability.”
 
Robert Szabó-Rowe, EVP and general manager, Live Production and Infrastructure, SAM, commented, “We’re really excited to have Timeline’s award-winning UHD2 truck on our stand at IBC, as it’s a tremendous showcase for our technology and testament to our close partnership with Timeline in delivering true market innovation. The truck offers a great opportunity for visitors to IBC to experience how IP is being used today in a real-life scenario.”
 
Timeline Television’s McDonnell will be presenting a detailed case study on UHD2 within the IBC IP Showcase theatre (E106/107).

  • Tuesday, Aug. 15, 2017
Lineup of events, program details unveiled for SMPTE 2017 Annual Technical Conference & Exhibition
SMPTE Education VP Richard Welsh (l) and Pat Griffis, SMPTE EVP, attend the 2016 Annual SMPTE Awards.
WHITE PLAINS, NY -- 

Program details for the SMPTE 2017 Annual Technical Conference & Exhibition (SMPTE 2017), Oct. 24-26 in Hollywood, Calif., have been announced. SMPTE 2017 will fill two exhibit halls and multiple session rooms at the Hollywood & Highland Center, and the event will also feature an Oktoberfest reception, Broadcast Beat’s SMPTE 2017 Live! Studio, and special events culminating with the SMPTE Annual Awards Gala at the Loews Hollywood Hotel’s Hollywood Ballroom on Thursday, Oct. 26.

“We’ve got an incredible lineup of technical sessions scheduled for this year, and we’re rounding out the conference and exhibition with some popular events that were added last year,” said SMPTE Education VP Richard Welsh, CEO of Sundog Media Toolkit. “The timely topics and technologies discussed at SMPTE 2017 are sure to make a splash as the Society dives into its next century of standards development and education.”

SMPTE’s Annual Technical Conference & Exhibition explores media and entertainment technology. The conference and exhibition will follow the daylong SMPTE 2017 Symposium — “Artificial Intelligence (AI) and Machine Learning in Digital Media Creation: The Promise, The Reality, and The (Scary?) Future” — on Oct. 23. The Symposium is co-chaired by SMPTE Fellow Michelle Munson and Yvonne Thomas of Arvato Systems. Further details about the Symposium will soon be available. Events on Oct. 23 also will include the annual Women in Technology Luncheon, presented by SMPTE and Hollywood Professional Association (HPA) Women in Post, and the SMPTE-HPA Student Film Festival, which will highlight the creative use of technology to support the art and craft of storytelling. Tickets for the luncheon and festival are available separately or as add-ons to a SMPTE 2017 conference registration.

The SMPTE 2017 Technical Conference program committee is co-chaired by three SMPTE Fellows: Paul Chapman, senior vice president of technology at Fotokem; Thomas Edwards, vice president of engineering and development at Fox; and SMPTE Education Director Sara J. Kudrle, product marketing manager for playout at Imagine Communications. SMPTE 2017 itself will include the usual wealth of technical sessions, along with an array of special events that offer numerous opportunities for face-to-face interaction between attendees, exhibitors, and speakers.

The first day of the technical conference will feature special events, including the Fellows Luncheon, open exclusively to SMPTE Fellows and Life Fellows who have registered for the event, as well as the SMPTE Annual General Membership Meeting and Oktoberfest Reception, both open to all attendees with conference registration. On the second day, the Evening Reception will take place in the Ray Dolby Exhibit Hall. The SMPTE 2017 Annual Awards Gala on the third and final day of the conference will welcome registered guests on the red carpet and treat them to a reception and dinner honoring industry leaders. SMPTE 2017 will conclude with the Awards After-Party featuring the SMPTE Jam, which once again will feature a pickup band comprising a diverse group of SMPTE members playing popular hits — and possibly a few original pieces created for the occasion.

Technical conference sessions throughout all three days of SMPTE 2017 will delve into the industry’s most innovative, intriguing, and important technological advances. The papers presented will address topics including advances in display technologies; cinema processing and projection technology; wider color and dynamic range; compression; content management and storage, restoration, and preservation; content security; virtual, augmented, and mixed reality (VR, AR, and MR); media infrastructure (SMPTE ST 2110) and distribution; image acquisition and processing; new techniques in audio; quality assurance and monitoring; workflow systems management; cloud technologies; and encouraging diversity in technology.

The emerging SMPTE ST 2110 suite of standards for professional media over IP (internet protocol) networks will be a hot topic during SMPTE 2017, and Leigh Whitcomb of Imagine Communications will present a paper titled “Is SMPTE ST 2110 the New Standards Superpower?” as part of the Media Infrastructure session. This and other session presentations will delve into the standard, implementation of IP for media production and distribution, and techniques used to optimize performance.

Among the presentations in the Advances in Display Technology session, “Engineering a Live UHD Program from the International Space Station” will feature Rodney P. Grubbs of NASA’s Marshall Space Flight Center and Sandy George of Science Applications International Corporation (SAIC), who will describe how they overcame engineering challenges involved with broadcasting live content in UHD from the International Space Station, as well as the ways commercial technologies are leveraged for in-orbit use.

Callum Hughes of Amazon Studios will present during the Content Security session, describing an approach to security within a digital asset management (DAM) system. During the Stream Privacy session, Raj Nair of Ericsson will discuss mechanisms for guaranteeing stream privacy for both OTT and live/linear adaptive-bit-rate (ABR) workflows.

The Advances in Immersive Storytelling session will feature “360-Degree Video Streaming and Its Subjective Quality,” a paper presentation by Igor Curcio and Henri Toukomaa of Nokia, and a case study by Éric Minoli and Kuban Altan, respectively from Canadian companies Groupe Média TFO and Zero Density, about bridging the gaming and broadcast industries for high-productivity production. The session focusing on new technologies and techniques will include “How Artificial Intelligence and Machine Learning Will Change Content Creation Methodologies,” by Tom Ohanian of TAO Associates.

Sessions on workflow systems will include “IMF End-to-End Workflows in Media Asset Management Systems,” presented by Julian Fernandez of Tedial, as well as “Applying an Agile Approach to Next-Generation Media Management,” presented by Arvato’s Ben Davenport and Christian Siegert. Moving into cloud-oriented workflows, Avid’s Shailendra Mathur will present “Media Cloud Migration Patterns: Connecting Services Between Bare Metal, Virtual Machines, and Containers.” Richard Cartwright of Streampunk Media will present his paper on “An Internet of Things Architecture for Cloud-Fit Professional Media Workflow.”

Speaking within the session on compression, RealNetworks’ Reza Rassool will present a paper titled “VMAF Reproducibility: Validating a Perceptual Practical Quality Metric for 4K Video.” Subhabrata Bhattacharya and Adithya Prakash of Netflix will look at quality from another perspective, presenting “Towards Scalable Automated Analysis of Digital Video Assets for Content Quality Control Applications” within the Quality and Monitoring of Images and Sound session.

The SMPTE 2017 session on UHD acquisition and processing will feature a presentation by YunHyoung Kim of the Korean Broadcasting System (KBS), whose paper describes the world’s first implementation of the Internet Media Subtitles and Captions 1.0 (IMSC1) closed-captioning system — on which ATSC 3.0 is based — on terrestrial UHD TV. The BBC’s Simon Thompson will present “Access Services for UHDTV: An Initial Investigation of W3C TTML2 Subtitles (Closed Captions).” Also in the UHD session, Pierre Hugues Routhier of Canada’s Creat3 inc. will present “Beyond 4K: Can We Actually Tell Stories in Motion Pictures and TV in 8K? A Cinematography Perspective.”

The Cinema Processing and Projection Technology session will include a presentation by Tim Ryan of Texas Instruments, who will explore techniques for using and optimizing variable-frame-rate display for cinematic presentations. A presentation by Kyunghan Lee of KAI Inc. will describe a new VR-based multiscreen movie theater simulator that enables researchers and multiscreen producers to provide a testing platform for multiscreen content and the viewing environment.

The Emerging Research in Visual Perception session will feature Elizabeth Pieri and Jaclyn Pytlarz of Dolby Laboratories, presenting “Hitting the Mark — A New Color Difference Metric for HDR and WCG Imagery,” and Elizabeth DoVale, also of Dolby Laboratories and a recipient of the 2016 SMPTE Louis F. Wolf Jr. Memorial Scholarship, presenting “Assessing Psychophysics Functions for Framerate Perception.” Martyn Gates of Ravensbourne and Pure & Applied Image Recognition Limited will present “Is Seeing Still Believing: A Critical Review of the Factors That Allow Humans and Machines to Discriminate Between Real and Generated Images,” a paper exploring the implications as CGI (photo-realistic moving images) increasingly becomes indistinguishable from actual pictures.

During the Content Management, Value Proposition, and Archiving session, Oracle Digital Media Solutions’ Brian Campanotti will present “SMPTE and ISO: Standards to Protect the World’s Most Valuable Assets,” a paper that delves into the inception, development, advancement, and deployment of the Archive eXchange Format (AXF). In the Next Generation TV session, a paper presentation by Alex Giladi of Comcast will discuss adaptive streaming of content that is produced using capped variable-bit-rate encoding.

The session titled “Innovating People: Managing, Mentoring, and Change” will be chaired by Loren Nielsen of Entertainment Technology Consultants and Kari Grubin of Walt Disney Studios, and will feature a discussion of mentoring and reverse-mentoring between baby boomer and millennial tech professionals. Kylee Peña of Bling Digital and Blue Collar Post Collective and Meaghan Wilbur of IncitefulMedia will discuss why diversity programs fail and how to fix them. John McCoskey of Eagle Hill Consulting — and former EVP and CTO at the Motion Picture Association of America (MPAA) — will present his paper, “A Formal Approach to Change Management for Dynamic Technology-Driven Media Organizations.”

  • Thursday, Aug. 10, 2017
Facebook envisions Watch feature as TV for social media
This image provided by Facebook shows a screenshot demonstrating Facebook's new Watch feature, which is dedicated to live and recorded video. The idea is to have fans commenting and interacting with the videos. The new Watch section is a potential threat to Twitter, YouTube, Netflix and other services for watching video. (Courtesy of Facebook via AP)
NEW YORK (AP) -- 

Facebook envisions its new Watch feature as TV designed for social media, a place where users comment, like and interact with show creators, stars and each other — and never leave.

It's a potential threat to Twitter, YouTube, Netflix and other services for watching video, including old-fashioned TV. Yet its success is far from guaranteed.

While people watch a lot of videos on Facebook, these are mostly shared by their friends, seen as users scroll down their main news feed.

Getting people to see Facebook as a video service is like Walmart trying to sell high fashion, or McDonald's peddling high-end food, said Joel Espelien, senior analyst with The Diffusion Group, a video research firm.

Sure, it's possible, but something is off.

"It's very difficult to change people's core perception of what your brand is," he said.

Facebook already has a special video section, but it mainly shows a random concoction of "suggested" videos. The new Watch section replaces it. Some U.S. users got Watch on Thursday; others will get it over time.

The idea behind Watch is to let people find videos and series they like, keep up with them as new episodes appear, and interact with the show's stars, creators and other fans. People's own tastes, as well as those of their friends, will be used to recommend videos.

Daniel Danker, a product director for video at Facebook, said the most successful shows will be the ones that get people interacting with each other. "Live does that better than almost anything," he said.

Facebook wants to feature a broad range of shows on Watch, including some exclusive to Facebook. Users who already follow certain outlets, say, BuzzFeed, will get recommended shows from those pages.

But Espelien wonders whether Facebook users will tap (or click) the Watch tab when with another tap of the finger they can "click over to Hulu or Netflix or whatever."

Though Facebook might want you to think otherwise, Espelien said there's no boundary keeping you from straying.

Advertising details are still being hashed out, but typically the shows will have five- to 15-second ad breaks. Facebook said show creators will decide where the ads go, so they can be inserted during natural breaks.

But it might be a tough sell for advertisers used to a predictable, reliable audience that television has had, Forrester Research analyst Jim Nail said in an email. Facebook's big challenge, he said, will be to train users "to establish a Watch habit."

  • Tuesday, Aug. 8, 2017
Meredith Corp. to standardize stations on Avid’s MediaCentral Platform
BURLINGTON, Mass. -- 

U.S. media group Meredith Corporation has chosen to standardize its workflow on Avid’s MediaCentral® Platform. Over a six-year period, Avid will upgrade 10 stations, install new Avid workflows at two additional stations, and enable Meredith to migrate to a virtualized environment, reducing costs and boosting efficiency while also benefiting from the advantage of adopting a common platform across the enterprise.
 
Meredith’s Local Media Group includes 17 owned or operated television stations reaching 11 percent of U.S. households. Meredith’s portfolio is concentrated in large, fast-growing markets, with seven stations in the nation’s Top 25--including Atlanta, Phoenix, St. Louis and Portland--and 13 in Top 50 markets. Its stations produce 700 hours of local news and entertainment every week, delivering 24/7 coverage on digital, mobile and broadcast platforms. Faced with the pressures of operating in a digital environment, Meredith needed to upgrade its aging infrastructure and reduce expenditures. A mix of disparate news production equipment at different stations made technology upgrades, support, training and planning complicated and expensive.
 
Meredith’s enterprise-wide adoption of Avid’s MediaCentral Platform will help the media company overcome these challenges. With a single platform across the enterprise and planned upgrades every two years, Meredith’s stations will benefit from advanced tools and workflows for enterprise-wide search and content sharing, and for embracing social media. 
 
“Avid is a leader in the broadcast news industry and has been a trusted partner for many years,” said Larry Oaks, VP of Technology at Meredith. “By standardizing on Avid’s platform, we have a one-stop shop for all our technology, support and training needs across our newsrooms, which will enable us to reduce costs, save a great deal of time and effort, and give us the tools we need to succeed in today’s digital environment.”
 
Meredith’s new workflow comprises Avid’s comprehensive tools and workflow solutions to create, deliver and optimize media, including Avid NEXIS®, the media industry’s first and only software-defined storage platform, MediaCentral | UX, the cloud-based, web front end for the MediaCentral Platform, Avid Interplay® | Production for asset management, and Avid iNEWS® and iNEWS | Command for newsroom management. Meredith will use Media | Distribute to deliver content to social media channels, as well as Media Composer® | Cloud Remote and Media Composer | NewsCutter® Option for nonlinear editing, and Avid AirSpeed® video servers. Avid Professional Services will provide installation, support and customized enterprise-wide training.
 
“Meredith is the latest member of Avid’s growing community of preeminent customers to adopt an enterprise-wide single platform approach,” said Jeff Rosica, president at Avid. “With Avid’s flexible commercial options and deployment models, Meredith can keep its stations and staff at the forefront of technology, virtualize its infrastructure, and respond quickly to new challenges and opportunities--all while reducing costs.” 

  • Saturday, Aug. 5, 2017
Academy investigates 11 scientific & technical areas for 2017 Oscars
LOS ANGELES -- 

The Academy of Motion Picture Arts and Sciences has announced that 11 distinct scientific and technical investigations have been launched for the 2017 Oscars®.

These investigations are made public so individuals and companies with devices or claims of innovation within these areas will have the opportunity to submit achievements for review.

The deadline to submit additional entries is Tuesday, August 15, at 5 p.m. PT.  The Academy’s Scientific and Technical Awards Committee has started investigations into the following areas:

  • Systems using multiple, stabilized, synced cameras to capture background footage, with integrated playback for simulating movement in static vehicles
  • Submersible, telescoping camera cranes
  • Automated systems for cinema auditorium quality control
  • Systems for onset digital dailies with color managed workflows
  • Systems for onboard RAW recording for digital cinema cameras
  • Gyroscopically stabilized camera platforms for aerial cinematography
  • Systems for modular character rigging enabling large scale, complex, high quality 3D digital character animation
  • Systems for digital storyboarding and story reel development
  • Efficient systems for interactive animation of large numbers of high-resolution 3D characters with full surface detail
  • Single surface audio platforms for automated dialogue replacement (ADR)
  • Software applications to synthesize complex sound scenes from a limited set of source elements

Claims of prior art or similar technology must be submitted online.

After thorough investigations are conducted in each of the technology categories, the committee will meet in November to vote on recommendations to the Academy’s Board of Governors, which will make the final awards decisions.

The 2017 Scientific and Technical Awards Presentation will be held on Saturday, February 10, 2018.

The 90th Oscars will be held on Sunday, March 4, 2018, at the Dolby Theatre® at Hollywood & Highland Center® in Hollywood, and will be televised live on the ABC Television Network at 7 p.m. ET/4 p.m. PT.  The Oscars also will be televised live in more than 225 countries and territories worldwide.

 

  • Wednesday, Aug. 2, 2017
RED RAVEN Camera Kit available via Apple.com
The RED RAVEN Camera Kit
IRVINE, Calif. -- 

RED Digital Cinema has announced that its RED RAVEN Camera Kit is now available exclusively through Apple.com and available to demo at select Apple Retail Stores. This complete handheld camera package features a diverse assortment of components from some of the industry’s top brands, including:

  • RED RAVEN 4.5K camera BRAIN
  • RED DSMC2 Touch LCD 4.7” Monitor
  • RED DSMC2 Outrigger Handle
  • RED V-Lock I/O Expander
  • RED 120 GB RED MINI-MAG
  • Two IDX DUO-C98 batteries with VL-2X charger
  • G-Technology ev Series RED MINI-MAG Reader
  • Sigma 18-35mm F1.8 DC HSM | Art
  • Nanuk heavy-duty camera case
  • Final Cut Pro X
  • foolcontrol iOS app for RAVEN Camera Kit

The RED RAVEN Camera Kit is available for $14,999.95. Customers can buy this package or learn more at Apple.com and select Apple Retail Stores.

“We are very excited to work with Apple on the launch of the RED RAVEN Camera Kit, available exclusively through Apple.com,” said Jarred Land, president of RED Digital Cinema. “The RED RAVEN Camera Kit is a ready-to-shoot professional package that gives content creators everything they need to capture their vision with RED’s superior image capture technology.”

The RAVEN 4.5K is RED’s most compact camera BRAIN, weighing in at just 3.5 lbs. This makes it a great choice for a range of applications including documentaries, online content creation, indie filmmaking, and use with drones or gimbals. The RAVEN is equipped with a 4.5K RED DRAGON sensor, and is capable of recording REDCODE RAW (R3D) in 4.5K at up to 120 fps and in 2K at up to 240 fps. RED RAVEN additionally offers incredible dynamic range, RED’s renowned color science, and is capable of recording REDCODE RAW and Apple ProRes simultaneously—ensuring shooters get the best image quality possible in any format.
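
Those recording options make media planning a real consideration. Below is a parametric sketch of how long the kit’s 120GB MINI-MAG lasts; the write rates are illustrative assumptions, since REDCODE data rates vary with resolution, frame rate and compression setting.

    def minutes_on_card(capacity_gb, write_mb_s):
        # Recording time = card capacity divided by sustained write rate.
        return capacity_gb * 1000 / write_mb_s / 60

    for rate in (50, 100, 150):   # MB/s, assumed REDCODE write rates
        print(f"{rate:>3} MB/s -> {minutes_on_card(120, rate):.0f} min")
    # 50 MB/s -> 40 min, 100 MB/s -> 20 min, 150 MB/s -> 13 min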

The RED RAVEN Camera Kit also includes Final Cut Pro X, which features native support for REDCODE RAW video, built-in REDCODE RAW image controls, and the most complete ProRes support of any video editing software. Together with the free RED Apple Workflow software, Final Cut Pro allows professional video editors to work quickly and easily with RED RAVEN footage on MacBook Pro, iMac, and Mac Pro systems.

  • Friday, Jul. 28, 2017
Faceware Technologies announces Faceware LiveSDK
LOS ANGELES -- 

Faceware Technologies, a provider of markerless 3D facial motion capture solutions, has announced an SDK for its real-time facial mocap and animation technology, Faceware Live. The Windows Native C++ SDK will enable developers and creatives to build their own real-time, interactive applications. SDK users can enable live player-to-player chat in games, create live interactive displays and activations, and even integrate the SDK into their own production tools and processes. Faceware will be speaking about the capabilities of the SDK at SIGGRAPH 2017 (Booth 741), Aug. 1-3.

“With the rise in VR/AR/MR, interactive marketing, and the use of CG, we’re seeing a growing number of inquiries from many different markets,” said Peter Busch, vice president of business development at Faceware Technologies. “Rather than addressing each and every request, we’ve created an SDK to enable developers to develop the tools they need to meet their own needs. We’ve got some amazing use cases I can’t wait to talk about.”

Features of the new SDK include the following (a hypothetical usage sketch follows the list):

  • Windows Native C++
  • High-frame-rate tracking with no visible latency
  • Over 100 APIs developers can use to track and animate faces in real time
  • Creates facial animation in real time from a person’s face on video
  • Tracks 82 landmarks on the face and streams over 40 animation controls
  • One-second camera-to-face calibration
  • Tracks facial movement from a live camera feed, a video file (e.g., .mov), or an image sequence (e.g., .jpg)
  • Works with almost any camera or webcam, including head-mounted cameras
  • Easy adjustment of camera settings to optimize the user experience
  • Tools to multiply and adjust animation output values to match your characters
  • Simulates animation output for easy debugging and for testing character animation before use
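
To make the feature list concrete, here is a purely hypothetical usage sketch. Faceware has not published the LiveSDK API in this announcement, so every class and method name below is invented to illustrate the pipeline the list describes (calibrate once, track landmarks per frame, stream animation controls to a rig); it is written in Python for brevity even though the SDK itself is native C++.

    import time

    class StubCamera:
        # Stand-in for a live camera feed, video file, or image sequence.
        def read(self):
            return b"frame-bytes"

    class StubTracker:
        # Invented stand-in for the tracker; the numbers mirror the list.
        def calibrate(self, frame):
            pass                                      # one-second calibration
        def track(self, frame):
            return {"landmarks": [(0.0, 0.0)] * 82,   # 82 facial landmarks
                    "controls": {"jaw_open": 0.1}}    # ~40 control values

    def run_realtime(tracker, camera, apply_to_rig, fps=60):
        tracker.calibrate(camera.read())              # calibrate on one frame
        for _ in range(3):                            # loop forever in practice
            result = tracker.track(camera.read())
            apply_to_rig(result["controls"])          # drive the 3D character
            time.sleep(1 / fps)

    run_realtime(StubTracker(), StubCamera(), print)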

“We’re really excited to put our real-time facial tracking technology directly into the hands of developers,” said Jay Grenier, director of software and technology at Faceware. “Faceware Live has been or is being used for a number of real-time applications, such as Hasbro’s live-streamed social media announcement for Monopoly and the recent Macinness-Scott installation at Sotheby’s ‘Art of VR’ event in New York. And now, with Faceware LiveSDK, the community is about to get a fantastic new tool to develop their own amazing applications.”