Thursday, October 19, 2017

Toolbox

  • Tuesday, Oct. 17, 2017
ANICA FLASH automates editing workflows with Telestream Post Producer and Vantage
Telestream resources being deployed at ANICA FLASH
NEVADA CITY, Calif. -- 

Rome-based ANICA FLASH, the producer and distributor of movie trailers to a network of leading Italian TV and radio broadcasters, has automated its editing and graphics workflows using Telestream technology. Working with local channel partner Allyn, ANICA FLASH has created a custom automated media processing workflow using Telestream Post Producer, Vantage and Vidchecker products.

Since 1977, ANICA FLASH has been an essential news source for Italian movie lovers, providing information about which movies are being shown at which theaters and when. Through its website, www.Comingsoon.it, movie trailers are distributed to 18 of Italy’s television stations, each of which requires specific customization and format changes in layout, graphics, duration and final format. Normally, such a workflow requires a considerable amount of skilled operator time.

With the goal of optimizing staff productivity, ANICA FLASH made the strategic decision to automate many elements of content production using Telestream’s Vantage Media Processing Platform. Allyn configured Telestream Post Producer to provide a workflow that uniquely addresses ANICA FLASH’s production needs. The workflow starts with a simple spreadsheet identifying the scheduled elements needed to produce multiple versions of the master edit, taking advantage of the unique “CSV workorder” functions in the underlying Vantage workflow framework. The spreadsheet includes details such as the name of the TV channel and the names of the clips to be inserted in sequence.
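A workorder of this kind can be pictured as a few rows of CSV driving one conform job per channel variant. The sketch below is illustrative only: the column names, channel names and file names are invented, not Telestream's actual workorder schema.

```python
import csv
import io

# Hypothetical workorder: each row names a target channel and the clips
# to assemble, in sequence, for that channel's variant of the master edit.
WORKORDER = """\
channel,clip_1,clip_2,duration_s
RAI 1,trailer_master.mov,rai1_endboard.mov,30
Canale 5,trailer_master.mov,c5_endboard.mov,20
"""

def parse_workorder(text):
    """Turn a CSV workorder into one edit job per channel variant."""
    jobs = []
    for row in csv.DictReader(io.StringIO(text)):
        jobs.append({
            "channel": row["channel"],
            "sequence": [row["clip_1"], row["clip_2"]],
            "duration_s": int(row["duration_s"]),
        })
    return jobs

jobs = parse_workorder(WORKORDER)
for job in jobs:
    print(job["channel"], "->", " + ".join(job["sequence"]))
```

Each resulting job dictionary would then be handed to the conform step, producing as many variants as there are rows.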

Post Producer automatically assembles and conforms the edits based on the elements defined in the spreadsheet, thereby creating as many variants of the master edit as required.  

In the same workflow, with no added manual intervention, other technical processing operations are performed, including automatic measurement and correction of loudness in accordance with EBU R128, and encoding to all of the detailed technical requirements of each broadcaster. The workflow value is further enhanced by integration of Telestream Vidchecker Automated Quality Control (QC) Software, which analyzes the quality of final deliverable files and certifies compliance of the final product.
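EBU R128 targets an integrated programme loudness of -23 LUFS, measured with the K-weighted, gated algorithm of ITU-R BS.1770. As a rough Python illustration of the correction step only (the measurement is assumed already done, and the -18.5 LUFS reading is an invented example):

```python
import numpy as np

TARGET_LUFS = -23.0  # EBU R128 integrated loudness target

def correction_gain_db(measured_lufs, target_lufs=TARGET_LUFS):
    """Gain (in dB) needed to bring a programme to the R128 target.
    The measurement itself must be K-weighted and gated per ITU-R
    BS.1770; this function only computes the resulting correction."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Apply a linear gain to float PCM samples."""
    return samples * (10.0 ** (gain_db / 20.0))

# A programme measured at -18.5 LUFS is 4.5 LU too loud:
gain = correction_gain_db(-18.5)
print(gain)  # -4.5
quiet = apply_gain(np.full(4, 0.5), gain)
```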

The system is able to produce multiple trailers simultaneously and return the final product within a few minutes. Technical compliance of the material to be distributed is guaranteed, minimizing problems caused by non-conforming content, resulting in significant time and cost savings for ANICA FLASH, whose operators can monitor the workflow online in real-time.

“Working closely with the team at Allyn, we have created an elegant automated workflow which significantly increases our efficiency and reduces our costs,” commented Paolo Cialfi, managing director at ANICA FLASH. “Telestream Post Producer has proved itself to be an elegant and sophisticated workflow automation tool, which will support our creative teams and enterprise operations, helping us to grow our business.”

  • Tuesday, Oct. 17, 2017
Cintel Scanner helps preserve thousands of film reels from Cypress Gardens’ archive
Just a sampling of the Cypress Gardens Treasure reels being preserved by the Cintel Scanner and DaVinci Resolve Studio
FREMONT, Calif. -- 

Blackmagic Design has announced that its Cintel Scanner and DaVinci Resolve Studio are being used by Paul Gerrish, of Cypress Gardens Treasure, to scan close to 6,000 reels of 16mm and 35mm film from the 1930s through to the 1980s. Cypress Gardens Treasure has acquired the film and photo archive from Cypress Gardens, a botanical garden and theme park that was one of the biggest attractions in Florida and known as the "Water Ski Capital of the World" for its water ski shows before it closed in 2009.
 
With a mission of preserving and providing access to the history of Cypress Gardens, Gerrish sought a solution that was both economical and safe for the film. “I wasn’t sure what the value of the collection would be, so economy was a big concern. Sending film out to be scanned was never an option as the cost would have far exceeded the price of a Cintel Scanner,” he said. “So far, I have scanned close to 300 reels without any problems, including shrinkage issues, which were my biggest concern. One 35mm nitrate film was even found and scanned on the Cintel; the film was from the 1940s but is clear enough to seem like it was filmed yesterday.”
 
“I generally scan most films at 24fps unless the film looks to be less than prime, in which case I will run it at about 12fps,” he noted. “Sometimes the reels are loaded with the ‘tail’ first, but rather than rewinding old reels, the Cintel can scan them as is. DaVinci Resolve Studio has a nice feature that allows me to change the speed to ‘reverse’ after the scan, and the film is automatically adjusted. The same goes for film that is flipped. One simple button in the color section will automatically adjust it so that the film is correct even if the initial scan was done backwards or reversed.”
 
Once the scan has been completed, Gerrish moves it to the timeline and proceeds to color. “There I can correct the film orientation, and most importantly, I can size the film to eliminate any gaps that would be seen on a widescreen. I adjust color using DaVinci Resolve Studio and then move to deliver,” he explained. “I don’t worry about making too many edits because I want to preserve the footage as is. This means I can also keep the Cintel running more often and can clean the rollers while it is rendering.”
 
Once the project is complete, Gerrish should have around 4,000 unique films. “The majority of these films have not been viewed for more than 50 years and are one-of-a-kind,” he said. “There was an episode of ‘The Ed Sullivan Show’ that featured the Cypress Gardens water skiers, and there are 50 or more reels of raw footage that were shot for it. There are also 12 reels of Johnny Carson’s visit to the park in 1968, film from three of King Hussein of Jordan’s visits, and I even have footage of Edward VIII and Wallis Simpson from their visit back in 1957. These are all amazing artifacts and would otherwise be lost without the Cintel Scanner.”
 
For Gerrish, however, it’s not just about high profile visitors. “I’ve already had people contact me about seeing family members in the footage. One person had been asking the park for the past forty years to find a film of her husband. I was able to find that specific reel, scan, convert and deliver it to her the next day,” he said.
 
According to Gerrish, with discussions taking place around a possible documentary about the park, having the digital film files will be invaluable, making it easy to preview and organize such a project. In the meantime, Gerrish makes DVDs of the footage and uploads many of the films to YouTube.

  • Friday, Oct. 13, 2017
Cinematographers play key role in Tech Emmy win for Fujinon Cine Zooms
During Fujinon Day Atlanta, Bill Wages, ASC (r), gives feedback on FUJINON Cabrios with Radames Gonzalez from Arri Rental in lens projection room.
WAYNE, NJ -- 

The Optical Devices Division of FUJIFILM has been awarded an Engineering Emmy® by the Television Academy for its FUJINON 4K cine zoom lenses “providing imagery in television,” and will receive the honor at the Academy’s October 25 Engineering Awards ceremony at the Loews Hollywood Hotel. The introduction of FUJINON’s Cabrio and Premier series of cinema zoom lenses brought the ability to cover Super 35mm imagers and efficiently shoot the full gamut of television production without sacrificing image quality.

“The willingness of some of the top cinematographers and their rental houses to test, explore and provide feedback about our lenses is an integral part of this Emmy win,” states Thomas Fletcher, director of sales, FUJIFILM Optical Devices Division. “They’re a very loyal group, devoted to their lens choice. To test a new cinema lens is not something that’s considered lightly. Winning an honor as prestigious as an Emmy is an affirmation of Fujifilm’s dedication to the art and craft of cinematography. We thank the Academy for their recognition of our work and for the support we’ve received from the cinematography community.”

In fact, two cinematographers won Creative Arts Emmys this year using FUJINON cine zooms: David Miller, ASC, for Veep won Outstanding Cinematography for a Single-Camera Series (Half Hour) honors; and Donald A. Morgan, ASC, was awarded an Emmy for The Ranch in the Outstanding Cinematography for a Multi-Camera Series category.

Others who’ve embraced the new FUJINON zoom lenses include three-time ASC Award winner William Wages, ASC (Burn Notice, Containment, Revolution, Sun Records). For Wages, the FUJINON Cabrio 19-90 and 85-300mm zooms have “changed the way I shoot.” Wages added: “With their speed alone, they’re virtually the only lenses I’m using. The optical quality, small size and speed are unequaled. The combination of these two lenses is ideal for television production.” Wages is also a recipient of the ASC Career Achievement in Television honor.

This marks the sixth Engineering Emmy award granted to Fujifilm and Fujinon. Past awards include:

  • “Development of the new high-speed color negative film A250 Color Negative Film” in 1982
  • “Developments in Metal Tape Technology” in 1990
  • “Implementation in Lens Technology to Achieve Compatibility with CCD sensors” in 1996
  • “Lens technology developments for solid-state imager cameras in high definition formats” in 2005
  • The world's first autofocus system, "Precision Focus," in 2009

  • Wednesday, Oct. 11, 2017
Facebook gets real about broadening virtual reality's appeal
In this Jan. 6, 2016, file photo, Peijun Guo wears the Oculus Rift VR headset at the Oculus booth at CES International in Las Vegas. (AP Photo/John Locher, File)
SAN FRANCISCO (AP) -- 

Facebook CEO Mark Zuckerberg seems to be realizing a sobering reality about virtual reality: His company's Oculus headsets that send people into artificial worlds are too expensive and confining to appeal to the masses.

Zuckerberg on Wednesday revealed how Facebook intends to address that problem, unveiling a stand-alone headset that won't require plugging in a smartphone or a cord tethering it to a personal computer like the Oculus Rift headset does.

"I am more committed than ever to the future of virtual reality," Zuckerberg reassured a crowd of computer programmers gathered in San Jose, California, for Oculus' annual conference.

Facebook's new headset, called Oculus Go, will cost $199 when it hits the market next year. That's a big drop from the Rift, which originally sold for $599 and required a PC costing at least $500 to become immersed in virtual reality, or VR.

Recent discounts lowered the Rift's price to $399 at various times during the summer, a markdown Oculus now says will be permanent.

"The strategy for Facebook is to make the onboarding to VR as easy and inexpensive as possible," said Gartner analyst Brian Blau. "And $199 is an inexpensive entry for a lot of people who are just starting out in VR. The problem is you will be spending that money on a device that only does VR and nothing else."

Facebook didn't provide any details on how the Oculus Go will work, but said it will include built-in headphones for audio and have an LCD display.

The Oculus Go will straddle the market between the Rift and the Samsung Gear, a $129 headset that runs on some of Samsung's higher-priced phones. It will be able to run the same VR apps as the Samsung Gear, leading Blau to conclude the Go will rely on the same Android operating system as the Gear and likely include processors similar to those in Samsung phones.

The Gear competes against other headsets, such as Google's $99 Daydream View, that require a smartphone. Google is also working on a stand-alone headset that won't require a phone, but hasn't specified when that device will be released or how much it will cost.

Zuckerberg promised the Oculus Go will be "the most accessible VR experience ever," and help realize his new goal of having 1 billion people dwelling in virtual reality at some point in the future.

Facebook and other major technology companies such as Google and Microsoft that are betting on VR have a long way to go.

About 16 million head-mounted display devices were shipped in 2016, a number expected to rise to 22 million this year, according to the research firm Gartner Inc. Those figures include headsets for what is known as augmented reality.

Zuckerberg, though, remains convinced that VR will evolve into a technology that reshapes the way people interact and experience life, much like Facebook's social networks and smartphones already have. His visions carry weight, largely because Facebook now has more than 2 billion users and plays an influential role in how people communicate.

But VR so far has been embraced mostly by video game lovers, despite Facebook's efforts to bring the technology into the mainstream since buying Oculus for $2 billion three years ago.

Facebook has shaken up the Oculus management team since then in a series of moves that included the departure of founder Palmer Luckey earlier this year.

Former Google executive Hugo Barra now oversees Facebook's VR operations.

  • Tuesday, Oct. 10, 2017
Digital Nirvana to demo media management wares at NAB Show NY
Digital Nirvana will showcase its sports clipping service at NAB New York
FREMONT, Calif. -- 

Digital Nirvana will showcase its full suite of media management products and services at the upcoming NAB Show New York. Booth highlights will include closed captioning solutions, automated sports clipping service, and the newest version of the MonitorIQ media management platform. NAB Show New York takes place October 18-19 at the Javits Convention Center, and Digital Nirvana will exhibit in booth N662.

“We enjoy NAB New York because it gives us a chance to connect with many current and potential regional customers who may not have attended recent international shows, such as IBC,” said Hiren Hindocha, president and CEO, Digital Nirvana. “We create smart media management solutions that streamline workflows, and our newest solutions and services were developed in response to consumer demand in the ever-changing broadcast and content creation landscape.”

One booth highlight will be the company’s all-in-one automated sports clipping service. Introduced earlier this year, the service enables broadcasters to easily capture and share every fast-paced moment in a game: it automatically analyzes sports broadcasts in real time and generates ready-to-publish highlight clips, with customization options available. The service is coupled with automated caption synchronization, enabling sports broadcasters to publish content online and via social media without significant delay while complying with all FCC regulations.

Another NAB Show New York highlight will be the company’s cloud-based closed captioning, subtitling, and video logging services. With postproduction, pop-on, and roll-up captioning options, the company delivers high-quality captions for pre-recorded and online video content through an automated, cloud-based process. Digital Nirvana’s caption synchronization technology uses audio fingerprinting to automate near-live synchronization of live broadcast captions. Automated speech-to-text conversion, coupled with a state-of-the-art workflow and experienced captioners, reduces the time and cost to publish and improves search engine discoverability, all while complying with FCC guidelines.
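Digital Nirvana's fingerprinting technology is proprietary, but the underlying idea of aligning captions to a delayed broadcast feed can be sketched with a simple cross-correlation: find the lag at which a known audio signature best matches the live feed, then shift caption timestamps by that amount. Everything below (sample rate, signals, captions) is invented for illustration.

```python
import numpy as np

def find_offset(reference, live, sample_rate):
    """Estimate how many seconds `live` lags `reference` by locating
    the peak of their cross-correlation. A stand-in for proprietary
    audio fingerprinting, not Digital Nirvana's actual method."""
    corr = np.correlate(live, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / sample_rate

sr = 100  # a deliberately low rate keeps the demo tiny
rng = np.random.default_rng(0)
ref = rng.standard_normal(sr)                     # 1 s audio signature
live = np.concatenate([np.zeros(sr // 2), ref])   # same audio, 0.5 s late

offset = find_offset(ref, live, sr)
print(offset)  # 0.5

# Shift caption timestamps by the measured offset:
captions = [(0.0, "Kick-off!"), (2.0, "Goal!")]
synced = [(t + offset, text) for t, text in captions]
```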

On the product side, Digital Nirvana will showcase its MonitorIQ media management platform, which delivers a full range of multi-channel signal monitoring, repurposing, logging, compliance and archiving functions. The latest version of MonitorIQ, V5.0, features cloud-based recording, OTT stream monitoring functions, and HTML5 and HTTP Live Streaming (HLS) support, and incorporates the ability to record from Matrox’s Monarch HDX streaming appliance. Digital Nirvana will also showcase its standalone media management products, including the CAR/TS (Capture, Analyze, Replay – Transport Stream) transport stream recorder, which records and monitors the transport stream, provides alerts of non-compliance, offers time-shifted playout, and allows users to cut segments and export sections of the transport stream for more detailed analysis. Other standalone product highlights include AnyStreamIQ for cloud-based OTT monitoring and MediaPro for content repurposing.

  • Friday, Oct. 6, 2017
RED Digital Cinema unveils Monstro 8K VV sensor
RED WEAPON with MONSTRO sensor
IRVINE, Calif. -- 

RED Digital Cinema® announced a new cinematic full frame sensor for WEAPON® cameras, MONSTRO™ 8K VV.  MONSTRO is an evolutionary step beyond the DRAGON 8K VV sensor with improvements in image quality including dynamic range and shadow detail.

This newest camera and sensor combination, WEAPON 8K VV, offers full frame lens coverage, captures 8K full-format motion at up to 60 fps, produces ultra-detailed 35.4 megapixel stills, and delivers incredibly fast data speeds of up to 300 MB/s. And like all of RED’s DSMC2 cameras, WEAPON supports simultaneous REDCODE® RAW and Apple ProRes or Avid DNxHD/HR recording and adheres to the company’s dedication to OBSOLESCENCE OBSOLETE®, a core operating principle that allows current RED owners to upgrade their technology as innovations are unveiled and move between camera systems without having to purchase all new gear.
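The quoted peak data rate makes media planning simple arithmetic. For a hypothetical 960 GB magazine (a capacity chosen for illustration, not a RED spec):

```python
# Record time at WEAPON's quoted peak data rate.
PEAK_RATE_MB_S = 300     # from RED's announcement
MAG_CAPACITY_GB = 960    # hypothetical magazine size

seconds = MAG_CAPACITY_GB * 1000 / PEAK_RATE_MB_S
print(f"{seconds / 60:.0f} minutes of recording at peak rate")
```

In practice, average REDCODE data rates are lower than the peak, so real-world record times would be longer.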

“RED’s internal sensor program continues to push the boundaries of pixel design and MONSTRO is the materialization of our relentless pursuit to make the absolute best image sensors on the planet,” said Jarred Land, president of RED Digital Cinema. “The Full Frame 8K VV MONSTRO provides unprecedented dynamic range and breathtaking color accuracy with full support for our IPP2 pipeline.”

The new WEAPON will be priced at $79,500 (for the camera BRAIN) with upgrades for carbon fiber WEAPON customers available for $29,500. MONSTRO 8K VV will replace the DRAGON 8K VV in RED’s line-up, and customers that had previously placed an order for a DRAGON 8K VV sensor will be offered this new sensor beginning today. New orders will start being fulfilled in early 2018.

RED has announced a comprehensive service offering for all carbon fiber WEAPON owners called RED ARMOR-W. RED ARMOR-W offers enhanced and extended protection beyond RED ARMOR, and also includes one sensor swap each year.

Additionally, RED has made its enhanced image processing pipeline (IPP2) available in-camera with the company’s latest firmware release (v7.0) for all cameras with HELIUM and MONSTRO sensors. IPP2 offers a completely overhauled workflow experience, featuring enhancements such as smoother highlight roll-off, better management of challenging colors, an improved demosaicing algorithm, and more. 

  • Wednesday, Oct. 4, 2017
Frame.io gains infusion of capital, looks to advance technology
Frame.io review page
NEW YORK -- 

Frame.io, developers of the video review and collaboration platform for content creators, has raised $20 million in Series B growth funding led by FirstMark Capital. The latest round of funding was also supported by return backers Accel Partners, SignalFire and Shasta Ventures. This latest infusion of capital brings Frame.io’s total funding to date to $32 million, which the startup will use to develop in key areas including the core video review and collaboration product, cloud and content security, and the Frame.io developer ecosystem. Founded in 2014, Frame.io is also backed by Hollywood heavyweights Jared Leto and Kevin Spacey.

Emery Wells, co-founder and CEO of Frame.io, described the new round of funding as “an exciting milestone for our team, and more importantly our community, and will help us in elevating our mission from reimagining video collaboration to reimagining postproduction itself.”

Frame.io makes the process of sharing and collaborating on video projects incredibly simple, through an intuitive user interface where users can upload and organize projects, then share internally or with clients to review and add feedback. Used by leading media and entertainment companies including TechCrunch, BBC, Vice, The Onion and Facebook, Frame.io has helped countless organizations in the transition to video, as more and more companies implement video into their branding strategies.

With the Series B funding, Frame.io will be making a sizable investment in iterating the core product, with a significant focus on cloud and content security. Because Frame.io is trusted by some of the world’s largest media corporations, security is top of mind for leaders in the industry; as such, it has become a core pillar of the Frame.io product offering and will continue to expand with features such as watermarking and a host of new security and compliance certifications, including MPAA. This investment in security will also extend to incorporating artificial intelligence and machine learning into the core and enterprise product roadmap.

“Artificial Intelligence is going to play a huge role in Frame.io’s future,” stated Matthew Ruttley, head of data at Frame.io, who spearheads the company’s data science initiatives. “Enterprise customers will benefit from a whole host of powerful, proprietary Machine Learning systems. These apply to everything from streamlining video review workflows, to robust, all-important security features.”

2017 has been a year of milestones for the New York City-based Frame.io, with the release of Frame.io 2 followed by the official launch of Frame.io Enterprise, the company’s enterprise-grade product designed to help the largest media clients, including Turner Broadcasting Systems and Buzzfeed, collaborate at scale. With over 370,000 users (and counting) in over 170 countries, Frame.io will be using this investment to double down on strategic product innovation, offering content creators a platform that connects the many different creative tools, publishing tools, stock services, asset management and storage systems, and many other specialty products involved in the business of creating video.

The new funding will also help Frame.io expand its rapidly growing team, which has doubled in the past year, across the board.

  • Wednesday, Oct. 4, 2017
Corso, Kalas, Silverman, Yedlin among new members of Academy’s Science and Tech Council
Leon Silverman, general manager, Digital Studio for the Walt Disney Studios
BEVERLY HILLS, Calif. -- 

Nafees Bin Zafar, Maryann Brandon, Bill Corso, Andrea Kalas, Ai-Ling Lee, Leon Silverman and Steve Yedlin have accepted invitations to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences, bringing the Council’s 2017–2018 membership roster to 25.

Bin Zafar, a technology development supervisor at Digital Domain, has worked in live-action visual effects and feature animation for the past 17 years. He received a 2007 Academy Scientific and Engineering Award for his work on the development of Digital Domain’s fluid simulation system, and a 2014 Academy Technical Achievement Award for the development of large-scale destruction simulation systems. His software has been used in a diverse set of films, including “Pirates of the Caribbean: At World’s End,” “2012,” “The Croods” and “Kung Fu Panda 3.” He also serves on the Academy’s Scientific and Technical Awards Committee and the Digital Imaging Technology Subcommittee. He became a member of the Visual Effects Branch in 2017.

Film editor Brandon earned an Oscar® nomination for her work on “Star Wars: The Force Awakens.” Her credits include such films as “Star Trek,” “Star Trek Into Darkness” and “Passengers,” and she is currently working on the feature “The Darkest Minds” for 20th Century Fox. Brandon has been a member of the Academy since 1998, and also is active in the Directors Guild of America (DGA), American Cinema Editors (ACE) and Women in Film (WIF). This month she appears in TCM’s “Trailblazing Women” series.

Corso is an Oscar-winning makeup artist and designer, whose recent credits include “Deadpool,” “Kong: Skull Island,” “Blade Runner 2049” and the upcoming “Star Wars: The Last Jedi.” His desire to bridge the gap between practical, on-set makeup and digital technology led him to create Digital Makeup Group (DMG), specializing in CG beauty work, age manipulation and makeup effects done from an expert makeup artist’s perspective. Corso has been a member of the Academy since 2004 and has served as governor of the Makeup Artists and Hairstylists Branch and chair of its executive committee. He has also served on the Academy’s Preservation and History Board Committee.

Kalas, vice president of archives at Paramount Pictures, has restored or preserved more than 2,000 films and is a technical innovator in systems for digital preservation and archive-based analytics. She also is a public advocate for preservation and film history through the Association of Moving Image Archivists, where she currently serves as president. She recently joined the Academy as a Member-at-Large.

Born in Singapore, sound designer Lee earned Oscar nominations for Sound Editing and Sound Mixing for “La La Land.” Her credits include “Buena Vista Social Club,” “Spider-Man 2,” “Transformers: Dark of the Moon,” “Godzilla,” “Wild,” “Deadpool” and “Battle of the Sexes.” She has been a member of the Academy’s Sound Branch since 2014.

As general manager, Digital Studio for the Walt Disney Studios, Silverman oversees digital studio services, which provide post production on-lot infrastructure, mastering, digital distribution services and workflow expertise. He is a past president and founder of the Hollywood Professional Association (HPA), a trade association focused on the professional media content creation industry. He currently serves as governor-at-large of the Society of Motion Picture and Television Engineers (SMPTE) and is an associate member of the American Society of Cinematographers (ASC) and affiliate member of ACE. He has been an Academy Member-at-Large since 2015 and serves on the Members-at-Large executive committee.

Yedlin is a cinematographer best known for his collaboration with director Rian Johnson on his films “Brick,” “The Brothers Bloom,” “Looper” and “Star Wars: The Last Jedi.” He has made ongoing contributions to industry technical awareness and education with his short film demos, papers and seminars. Yedlin has been a member of the ASC since 2015, a guest lecturer at the American Film Institute since 2011, and a member of the Academy’s Cinematographers Branch since 2016.

The returning Council co-chairs for 2017–2018 are two members of the Academy’s Visual Effects Branch: Academy governor Craig Barron, an Oscar-winning visual effects supervisor; and Paul Debevec, a senior staff engineer at Google VR, adjunct professor at the USC Institute for Creative Technologies and a lead developer of the Light Stage image capture and rendering technology, for which he received a Scientific and Engineering Award in 2009.

The Council’s 16 other returning members are Wendy Aylsworth, Academy president John Bailey, Rob Bredow, Annie Chang, Douglas Greenfield, Rob Hummel, Academy governor John Knoll, Beverly Pasterczyk, Cary Phillips, Joshua Pines, Douglas Roble, David Stump, Steve Sullivan, Bill Taylor, Academy vice president Michael Tronick and Beverly Wood.

Established in 2003 by the Academy’s Board of Governors, the Science and Technology Council provides a forum for the exchange of information, promotes cooperation among diverse technological interests within the industry, sponsors publications, fosters educational activities, and preserves the history of the science and technology of motion pictures. 

  • Tuesday, Oct. 3, 2017
Aussie broadcaster SBS to deploy Dalet Galaxy MAM and Orchestration platform
Pictured (l-r) are SBS CTO Noel Leslie, Dalet COO Stephane Schlayen, Dalet's general manager for Asia Pacific region Raoul Cospen and SBS manager Darren Farnham.
SINGAPORE -- 

Dalet, a provider of solutions and services for broadcasters and content professionals, announced that Australia’s Special Broadcasting Service (SBS) is significantly expanding the portion of its media operations powered by the Dalet Galaxy Media Asset Management (MAM) and Orchestration platform. The new implementation will facilitate production and distribution of news, sports, multilingual radio programming and music content across the broadcaster’s TV, radio and digital platforms. The deployment will also bolster SBS’s production capability ahead of the 2018 FIFA World Cup.

Building on the successful integration of dozens of systems and automation of several program management workflows under a unified Dalet Galaxy environment, the expanded installation will now encompass news and sports production as well as full radio automation for SBS music channels. This will deliver production content to three TV channels, eight radio channels (music and talk shows), the SBS website and online apps. SBS’s radio programming is produced in more than 70 languages, making it the most linguistically diverse broadcaster globally. In addition to the 2018 FIFA World Cup, other tier-one sporting events whose production will be facilitated by the new integration include the Tour de France and the English Premier League.

SBS chief technology officer Noel Leslie said, “Following the first phase of our strategic move to streamline our programming content under the management of a single MAM platform, we embarked on phase two in full confidence of our partnership with Dalet. SBS and Dalet teams have worked collaboratively on the software commissioning and the system integration for this project. Change management is also extremely important to us at SBS, and we have made it core to our strategy to involve key stakeholders across the chain and across our geographically spread operation right from the start of the process.”

The system will be deployed across four sites including SBS headquarters in Sydney, connected production operation in Melbourne, a third system in Canberra, and a Business Continuity (BC) / Disaster Recovery (DR) site also in Sydney.

Specifically, Dalet will unify content preparation, production and ingest at two TV studios and eight radio studios in Sydney and an additional eight radio studios in Melbourne, bringing together up to 300 simultaneous users working with the system. Video ingest for 50 channels spread across the country, alongside multiple channels of audio ingest, will be centrally managed under the control of Dalet. The Dalet AmberFin media processing platform will assist with transcoding as required.

Dalet On-the-Go will also be available to connect journalists in the field directly to the central Dalet Galaxy platform. Dalet OneCut is provided for desktop editing and remote editing at the Canberra studios. Industry-standard, BPMN 2.0-compliant Dalet Workflow Engine automates multi-platform publishing, including social media workflows, as well as archiving operations.

“There are many tangible benefits SBS will receive by further standardizing production under one unified environment; lower TCO, optimized support and training costs, fewer systems to integrate – all thanks to the powerful agility and extensibility of the Dalet Galaxy platform,” said Raoul Cospen, Dalet product manager. “Using the full scope of the Dalet platform, SBS is able to unite and streamline its content collaboration across geographically diverse SBS departments, and orchestrate the program acquisition, preparation and distribution workflows.”

The Dalet Galaxy open APIs are used for a variety of third-party interfaces, including the music scheduling system Power Gold, Adobe® Premiere® Pro CC for craft editing across the three production sites, and Opta Sports data feeds. Integration with Dell EMC® Elastic Cloud Storage (ECS™) provides SBS teams with a single user interface to easily access and manage content. Further integrations facilitated by Dalet include Ross Overdrive in SBS’s automated production studio, Ross Expression for graphics, and the content management system Drupal. Integrations with the social media networks Facebook, Twitter and YouTube make this installation a complete end-to-end solution, enabling SBS to effectively reach its audiences across all available platforms.

  • Tuesday, Oct. 3, 2017
Andrew Shulkind to present keynote at SMPTE Technical Conference & Exhibition
Andrew Shulkind, SMPTE 2017 keynote speaker
LOS ANGELES & WHITE PLAINS, NY -- 

Technologist and award-winning cinematographer Andrew Shulkind will present the keynote at the SMPTE 2017 Annual Technical Conference & Exhibition (SMPTE 2017), which will take place Oct. 23-26 at the Hollywood & Highland Center in Los Angeles.

A co-founder of HeadcaseVR and sought-after expert on virtual reality, augmented reality, and mixed reality (VR, AR, and MR) content capture and creation, Shulkind will share his experiences in developing and using the latest immersive media technologies and techniques in his keynote, "The Immersive Future: Broaden Your Horizons."

"Long known for his artistry with visual effects and lighting, Andrew has more recently turned his natural ease with innovative technologies toward the field of VR, AR, and mixed reality," said Richard Welsh, SMPTE education VP and CEO of Sundog Media Toolkit. "In addition to shooting some of the earliest and most inventive VR projects, Andrew has worked with top advertisers, brands, and studios — as well as the U.S. military — to develop and implement VR and mixed-reality projects, and also to design and test innovative new capture systems and technologies. His keynote address will bring a fresh perspective to SMPTE 2017 on the state of immersive technology and its application in creating uniquely engaging content."

Shulkind's keynote will give attendees perspective on the impending media disruption, which promises exponential growth in everything from field of view to storage requirements, and from compression demands to distribution networks. The takeaway for attendees will be a clearer sense of what immersive content is, and what we can gain from shaping its successful implementation.

"Capturing and delivering content in 360 degrees is expanding the window that has framed our previous entertainment experiences. This ultimate field of view is the next natural step in a progression of immersive storytelling that is meant to maximize viewer engagement," said Shulkind. "The challenges and advantages of capturing immersive elements are evolving for this kind of experiential delivery, and how the art form of traditional content coexists and overlaps with interactivity, artificial intelligence, and gamification of entertainment. We now have the opportunity and responsibility to sustain the quality of narrative legacy and premium human craft of the best television, advertising, and movies of our past in the interactive, data-driven future." 

Prior to co-founding HeadcaseVR in 2014, Shulkind worked in feature films and broadcast advertising for clients such as Paramount, DreamWorks, Sony Pictures, Apple, Adidas, AT&T, Budweiser, Google, Old Spice, and Samsung. He received the International Cinematographers Guild (ICG) Emerging Cinematographer Award in 2013, the Studio Daily Prime Award in 2014, and a Studio Daily Top 50 Award for Creativity and Innovation in 2016. With his move into VR, Shulkind leveraged his experience working with 3D images, miniatures, and visual effects (VFX) to design a 32K RAW, 360-degree VR camera rig that today remains the industry's highest-resolution professional-grade VR acquisition device.

The keynote will be among dozens of presentations offered by subject matter experts over the course of SMPTE 2017, which will fill two exhibit halls and multiple session rooms at the Hollywood & Highland Center. The event will also feature an Oktoberfest reception, Trick-or-Treat Spooktacular cocktail reception, Broadcast Beat's SMPTE 2017 Live! Studio, and special events culminating with the SMPTE Annual Awards Gala at the Loews Hollywood Hotel's Hollywood Ballroom on Thursday, Oct. 26.