Multichannel and Multimedia Audio Distribution Demo Scheduled for NY AES Convention

Elizabeth Cohen

Unless you've been sequestered in a nightclub basement for the last few years, it should be apparent that the future of music lies online. Streaming real-time audio over wide-area networks, ripping audio to your hard drive, downloading files from the Internet, playing your own compilations, and burning your own music CDs have become common activities. As of mid-July, MP3.com was averaging 325,000 hits a day and offering a library of 100,000 songs from 18,000 musicians, most of them unsigned by record labels. Liquid Audio has enabled over 1,300 musicians from 300 labels to publish, syndicate, and sell secure music to consumers over the Internet. RealNetworks' RealPlayer has registered over 65 million unique users; the company cites an average download rate exceeding 175,000 per day, an increase of more than 270% since the beginning of 1997. Every week, over 145,000 hours of audio are broadcast over the Internet using RealSystem technology. Whatever the approach, whoever the audience, the music bits are coming. No, make that: the music bits are playing, and in stereo.

As the Gartner Group reports, "Recent technology advances have made online distribution of music a fait accompli." While the battle rages between the widely available, free MP3 open format and the music industry's SDMI response, those of us working in multichannel audio must look to the next generation of possibilities, for the future of multichannel audio also lies online.

To date, network quality has tended to limit both the range of formats and the quality of the audio available online. Advanced networks make it possible to overcome some of these limitations and to implement applications that transmit higher-bandwidth, multichannel audio content in real time.

With this potential in mind, members of the AES Technical Committee on Network Audio Systems (TCNAS) are organizing a Demonstration of Multichannel Audio Distribution and a Workshop/Panel (described below) for the upcoming AES convention in New York City on Sunday, Sept. 26. The panel is scheduled for 2-4 PM at the Javits Center; the subsequent demonstration will be held at NYU.

The panel will consist of experts with backgrounds in advanced networks, including engineers, audiovisual application designers and users, musicians, and network standards/policy specialists. The event will be co-chaired by Atau Tanaka and Zack Settel of McGill University. It will include a general discussion of the demonstration (described below), followed by a roundtable discussion of the scheduled panel topics.

The Demonstration

The Advanced Network demonstrations will take place in a theater space at New York University (NYU), where dancers from NYU will perform to music provided remotely by a jazz band playing live at McGill University in Montreal.

The general demo plan is for the music to be captured as a multichannel audio signal and streamed to NYU across a high-performance network managed by CANARIE (Canada) and Internet2 (USA). Both compressed and uncompressed multichannel audio transmission, at different sampling rates and word sizes, will be featured.

The underlying software for the demonstration was developed at McGill University by a team that includes several members of TCNAS.

The TCNAS group has designed two major demos.

In DEMO1, the audio stream will consist of an AC-3-encoded (Dolby Digital 5.1) stream (48 kHz, 16 bit), using roughly 1.5 Mb/s of bandwidth; it makes use of existing encoding and decoding hardware. In DEMO2, the audio stream will consist of six channels of uncompressed 96 kHz, 24-bit audio, using roughly 13 Mb/s of bandwidth; it will make use of encoding and decoding hardware currently being developed by dCS Ltd. in the UK. The format of the DEMO2 device's bitstream I/O is currently under discussion.
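As a back-of-the-envelope check on the DEMO2 figure, the raw payload rate of an uncompressed PCM stream is simply channels times sample rate times word size (IP framing overhead excluded):

    # Raw PCM payload rate: channels x sample rate (Hz) x bits per sample.
    channels, rate_hz, bits = 6, 96_000, 24
    print(channels * rate_hz * bits / 1e6)  # 13.824 Mb/s -- the "roughly 13 Mb/s" cited above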

In both demos, the plan is for a multichannel analog signal to be converted to a digital bitstream by a stand-alone hardware device. The bitstream output of the device is sent to a host workstation, where it is packetized (IP) and transmitted to the distant location (via the CANARIE and Internet2 networks); there the packets are collected on another workstation and the bitstream is reassembled. The reassembled bitstream is then fed to a hardware decoder, and the "original" multichannel analog signal is recovered. An MPEG-1 video stream will also be sent in parallel from the host workstation to the remote location.
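As a rough illustration of the packetizing step (a minimal sketch only: the McGill software is unpublished, and the UDP transport, 1,400-byte payload size, sequence-number header, and receiver address below are all assumptions):

    import socket
    import struct

    PAYLOAD = 1400                 # keep each packet under a typical Ethernet MTU
    DEST = ("192.0.2.10", 9000)    # placeholder address for the receiving workstation

    def send_bitstream(chunks, dest=DEST):
        """Prepend a 32-bit sequence number to each bitstream chunk and send it over UDP."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        seq = 0
        for chunk in chunks:       # each chunk: up to PAYLOAD bytes of encoder output
            sock.sendto(struct.pack("!I", seq) + chunk, dest)
            seq = (seq + 1) & 0xFFFFFFFF

The sequence numbers let the receiving workstation reassemble the bitstream in order and detect missing packets before the data reaches the hardware decoder.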

The software running on the workstations is currently being developed at McGill. Beyond creating and collecting IP packets, the software will synchronize the video stream to the audio stream at the transmit stage. The software development team is led by Professor Jeremy Cooperstock of the Centre for Intelligent Machines, McGill University.
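One common way to achieve transmit-stage synchronization (an illustrative assumption, not necessarily the McGill approach) is to stamp audio and video packets from a single clock so the receiver can realign the two streams on playout:

    import struct
    import time

    EPOCH = time.monotonic()       # one shared clock for both media streams

    def stamp(payload: bytes) -> bytes:
        """Prepend a 64-bit microsecond timestamp, common to audio and video packets."""
        usec = int((time.monotonic() - EPOCH) * 1_000_000)
        return struct.pack("!Q", usec) + payload

At the receiving end, packets from both streams can then be buffered briefly and scheduled for output when their timestamps match.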

Unlike DEMO1, DEMO2's performance will depend on the availability of sufficient sustained bandwidth (approximately 13 Mb/s) between Montreal and New York City. Securing and reliably maintaining this bandwidth for the duration of the performance will be a key element of the demo/test. The demo can be seen as an essential step in solving network-collaboration issues: choreographing the coordination dance between CANARIE and Internet2 is vital to the performance.

Another issue the demo will explore is the monitoring of signal quality. To develop a better understanding of dropouts related to network quality, the team will record the signals received at NYU.
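A minimal sketch of how such dropout monitoring might work at the receiving end (assuming, as above, sequence-numbered UDP packets; the team's actual method is not specified):

    import socket
    import struct
    import time

    def log_dropouts(port=9000):
        """Log gaps in the packet sequence, with timestamps, for later
        correlation with the recorded audio."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        expected = None
        while True:
            packet, _ = sock.recvfrom(2048)
            seq = struct.unpack("!I", packet[:4])[0]
            if expected is not None and seq != expected:
                print(f"{time.time():.3f}  lost {(seq - expected) & 0xFFFFFFFF} packet(s)")
            expected = (seq + 1) & 0xFFFFFFFF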

Testing of the demonstration network is currently underway. Professor Cooperstock is in the process of designing a testing daemon that will report statistics on network performance. The nature of this demo is to push the boundaries of the possible, so both the results of the dress rehearsals and the event itself should be of interest to the multichannel community.
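The details of that daemon are not yet published; as an illustration only, a test daemon of this kind might receive sequence-numbered probe packets and periodically report loss counts and interarrival times:

    import socket
    import struct
    import time

    def stats_daemon(port=9001, interval=5.0):
        """Report packets received, packets lost, and mean interarrival time
        every `interval` seconds."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        got, lost, gaps = 0, 0, []
        last_seq = last_t = None
        deadline = time.time() + interval
        while True:
            pkt, _ = sock.recvfrom(2048)
            now = time.time()
            seq = struct.unpack("!I", pkt[:4])[0]
            got += 1
            if last_seq is not None:
                lost += max(0, seq - last_seq - 1)
                gaps.append(now - last_t)
            last_seq, last_t = seq, now
            if now >= deadline:
                mean_ms = 1000 * sum(gaps) / len(gaps) if gaps else 0.0
                print(f"received={got} lost={lost} mean_gap={mean_ms:.2f} ms")
                got, lost, gaps = 0, 0, []
                deadline = now + interval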

A point-to-point connection will be used (as opposed to an MBONE-type distribution), so no monitoring of the event will be possible from any location other than NYU. At present, we are transmitting to a single network address; however, it would be a trivial matter to change this to multicast and allow multiple simultaneous receivers without impacting quality at the primary receiving end. For a reference, see AFDP.
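To make the point concrete, here is what that change amounts to at the socket level (a sketch; both addresses below are placeholders):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Point-to-point: every packet goes to one fixed receiver.
    sock.sendto(b"audio-packet", ("192.0.2.10", 9000))

    # Multicast: send once to a group address; every host that joins the
    # group receives the same packets, at no extra cost to the sender.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
    sock.sendto(b"audio-packet", ("239.1.2.3", 9000))

Receivers would simply join the group (via the standard IP_ADD_MEMBERSHIP socket option); nothing changes in the packetizing or decoding chain.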