Author Details

Name:
Prof. Dr. Frank Melchior
Position:
Professor
Teaching Areas:
3D Audio Systems
Digital Audio Signal Processing
MAX/MSP
Musical Acoustics
Applied Psychoacoustics
IP-based Media Systems

Fundamentals of Audio Technology
Fundamentals of Electronics
Research Areas:
Object-based Media Systems
Spatial Audio and 3D Audio Systems
Degree Programme:
Audiovisuelle Medien (Bachelor's, 7 semesters)
Faculty:
Faculty of Electronic Media
Office Hours:
by appointment
Room:
318, Nobelstraße 10 (Hörsaalbau)
Phone:
0711 8923-2259
Email:
melchior@hdm-stuttgart.de
Frank Melchior

Curriculum Vitae (brief)

Freelance consultant for research, technology, and innovation - since 2017

BBC R&D | Head of Audio Research and Audio Research Partnerships | Lead Technologist - 2012 to 2017

IOSONO GmbH | Chief Technical Officer (CTO) - 2011 to 2012

IOSONO GmbH | Head of Research and Development - 2009 to 2010

Fraunhofer IDMT | Research and Development Engineer, Project Manager - 2003 to 2009

 

Curriculum Vitae

Scientific consultant for research, technology, and innovation - since 2017

  • Consulting on media technology innovation processes in the field of sound and electroacoustics
  • Support for stakeholder management in complex change processes
  • Technical consulting on objective and subjective quality evaluation in the development of audiovisual systems
  • Workshops and process facilitation in early development phases
  • Scientific expert reports and state-of-the-art consulting in the field of media technology

 

BBC R&D | Head of Audio Research and Audio Research Partnerships | Lead Technologist - 2012 to 2017

  • Leadership and expansion of the BBC audio research group in London and Manchester
  • Development and leadership of the BBC Audio Research Partnership, a strategic partnership between BBC R&D and five universities for carrying out research initiatives and projects and jointly acquiring third-party funding
  • Development and realisation of the research and development plan, taking into account cross-team strategic goals for integrating new audio technology into 360 video, AR/VR, TV, and radio production
  • Initiation and management of academic and commercial cooperation projects
  • Representation of the BBC in international standardisation bodies (ITU, EBU, DVB)
  • Staff management and recruiting for the audio research group
  • Supervision of doctoral candidates and students

 

IOSONO GmbH | Chief Technical Officer (CTO) - 2011 to 2012

  • Responsible for the strategic and technical direction of the company
  • Staff responsibility (recruiting, leadership) for a technical team of more than ten engineers and developers in product development and research
  • Development and realisation of the technology roadmap together with the other members of the leadership team
  • Member of the advisory board and steering committee
  • Initiation of cooperations and management of external technology suppliers
  • Representation of technical management in contract negotiations
  • Establishment of structures for project and product management as well as controlling
  • Algorithm and software development

IOSONO GmbH | Head of Research and Development | Director R&D - 2009 to 2010

  • Responsible for leading the research and development team of two to five engineers in product and algorithm development
  • Development of new algorithms for spatial audio reproduction based on wave field synthesis and psychoacoustic principles
  • Acquisition, management, and execution of research and development projects
  • Establishment of development processes as well as new laboratories and production studios
  • Introduction of a patent management system
  • Supervision of master's, Diplom, and student research theses

 

Fraunhofer IDMT | Research and Development Engineer, Project Manager - 2003 to 2009

  • Development engineer responsible for subprojects within industrial and European research projects
  • Development of systems and algorithms for spatial sound reproduction in sound reinforcement, cinema sound, and automotive environments
  • Member of the Virtual Acoustics leadership circle
  • Doctorate at Delft University of Technology under Prof. D. de Vries and Prof. Brandenburg
  • Supervision of student research projects and Diplom theses

 

Publications

Dissertation

[1] F. Melchior, “Investigations on spatial sound design based on measured room impulse responses,” Delft University of Technology, 2011.

Book Chapters

[2] B. Shirley, R. Oldfield, F. Melchior, and J. M. Batke, “Platform Independent Audio,” in Media Production, Delivery and Interaction for Platform Independent Systems: Format-Agnostic Media, 2014, pp. 130–165.

Articles

[3] P. Coleman et al., “An Audio-Visual System for Object-Based Audio: From Recording to Listening,” IEEE Trans. Multimed., vol. 20, no. 8, pp. 1919–1931, 2018.

[4] T. Walton, M. Evans, D. Kirk, and F. Melchior, “Exploring object-based content adaptation for mobile audio,” Pers. Ubiquitous Comput., vol. 22, no. 4, pp. 707–720, 2018.

[5] M. Evans et al., “Creating object-based experiences in the real world,” SMPTE Motion Imaging J., vol. 126, no. 6, 2017.

[6] P. Coleman, A. Franck, P. J. B. Jackson, R. J. Hughes, L. Remaggi, and F. Melchior, “Object-based reverberation for spatial audio,” in AES: Journal of the Audio Engineering Society, 2017, vol. 65, no. 1–2, pp. 66–77.

[7] J. Woodcock, W. J. Davies, T. J. Cox, and F. Melchior, “Categorization of Broadcast Audio Objects in Complex Auditory Scenes,” AES J. Audio Eng. Soc., vol. 64, no. 6, pp. 380–394, 2016.

[8] S. Spors, H. Wierstorf, A. Raake, F. Melchior, M. Frank, and F. Zotter, “Spatial sound with loudspeakers and its perception: A review of the current state,” Proceedings of the IEEE, vol. 101, no. 9. pp. 1920–1938, 2013.

[9] F. Melchior, A. Churnside, and S. Spors, “Emerging Technology Trends in Spatial Audio,” SMPTE Motion Imaging J., vol. 121, no. 6, pp. 95–100, 2012.

[10] F. Melchior and T. Sporer, “3D-Audio: Ein universelles objektbasiertes Audioformat,” FKT, no. 4, pp. 156–159, 2011.

[11] F. Melchior, “Wave Field Synthesis and Object-Based Mixing for Motion Picture Sound,” SMPTE Motion Imaging J., vol. 119, no. 3, pp. 53–57, 2010.

Conference Papers (Selection)

[12] J. Francombe et al., “Media device orchestration for immersive spatial audio reproduction,” in ACM International Conference Proceeding Series, 2017, vol. Part F1319.

[13] T. Walton, M. Evans, and F. Melchior, “Combining preference ratings with sensory profiling for the comparison of audio reproduction systems,” in Audio Engineering Society 142nd Convention, 2017, pp. 1–10.

[14] J. Francombe et al., “Media Device Orchestration for Immersive Spatial Audio Reproduction,” in 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences - AM ’17, 2017, pp. 1–5.

[15] J. Woodcock, C. Pike, F. Melchior, P. Coleman, A. Franck, and A. Hilton, “Presenting the S3A Object-Based Audio Drama Dataset,” in Audio Engineering Society 140th Convention, 2016.

[16] T. Walton, M. Evans, D. Kirk, and F. Melchior, “Does environmental noise influence preference of background-foreground audio balance?,” in Audio Engineering Society 141st Convention, 2016, pp. 1–10.

[17] C. Pike, F. Melchior, and A. Tew, “Descriptive analysis of binaural rendering with virtual loudspeakers using a rate-all-that-apply approach,” in Audio Engineering Society Conference on Headphone Technology, 2016, vol. 2016-August, pp. 1–8.

[18] M. Paradis, C. Pike, R. Day, and F. Melchior, “A Novel Approach to Streaming and Client Side Rendering of Multichannel Audio with Synchronised Metadata,” in 2nd Web Audio Conference (WAC-2016), 2016, vol. 2076, p. 2076.

[19] N. Zacharov, C. Pike, F. Melchior, and T. Worch, “Next generation audio system assessment using the multiple stimulus ideal profile method,” in 2016 8th International Conference on Quality of Multimedia Experience, QoMEX 2016, 2016.

[20] T. Walton, M. Evans, D. Kirk, and F. Melchior, “A Subjective Comparison of Discrete Surround Sound and Soundbar Technology by Using Mixed Methods,” in Audio Engineering Society 140th Convention, 2016.

[21] C. Pike, R. Taylor, T. Parnell, and F. Melchior, “Object-based Spatial Audio Production for Virtual Reality using the Audio Definition Model,” Proc. AES Int. Conf. Audio Augment. Virtual Real., vol. 2016–September, pp. 1–7, 2016.

[22] P. Coleman, A. Franck, P. Jackson, R. Hughes, L. Remaggi, and F. Melchior, “On Object-Based Audio with Reverberation,” in Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech), 2016.

[23] A. Mason, N. Jillings, Z. Ma, J. D. J. D. Reiss, and F. Melchior, “Adaptive Audio Reproduction Using Personalized Compression,” in Audio Engineering Society Conference: 57th International Conference: The Future of Audio Entertainment Technology – Cinema, Television and the Internet, 2015.

[24] J. Francombe, T. Brookes, R. Mason, and F. Melchior, “Loudness matching multichannel audio programme material with listeners and predictive models,” in Audio Engineering Society 139th Convention, 2015.

[25] M. P. Cousins, F. M. Fazi, S. Bleeck, F. Melchior, and A. Mason, “Subjective diffuseness in layer-based loudspeaker systems with height,” in Audio Engineering Society 139th Convention, 2015.

[26] M. P. Cousins, F. M. Fazi, S. Bleeck, and F. Melchior, “Maximising Perceived Diffuseness in Loudspeaker Systems with Height Using Optimised Relative Loudspeaker Levels,” in Institute of Acoustics: Reproduced Sound 2015, 10–12 Nov. 2015.

[27] A. Franck, F. M. Fazi, and F. Melchior, “Optimization-based reproduction of diffuse audio objects,” in 2015 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA 2015, 2015.

[28] M. Paradis, R. Gregory-Clarke, and F. Melchior, “VenueExplorer, Object-Based Interactive Audio for Live Events,” in Proceedings of the 1st Web Audio Conference (WAC-2015), 2015.

[29] C. Pike, P. Taylour, and F. Melchior, “Delivering Object-Based 3D Audio Using The Web Audio API And The Audio Definition Model,” 1st Web Audio Conf., pp. 2–6, 2015.

[30] J. Francombe, T. Brookes, R. Mason, and F. Melchior, “Loudness Matching Multichannel Audio Program Material with Listeners and Predictive Models,” in Audio Engineering Society 139th Convention, 2015.

[31] F. Melchior, D. Marston, C. Pike, D. Satongar, and Y. W. Lam, “A Library of Binaural Room Impulse Responses and Sound Scenes for Evaluation of Spatial Audio Systems,” in 40. Deutsche Jahrestagung für Akustik - DAGA 2014, 2014.

[32] M. Shotton, C. Pike, and F. Melchior, “A Motorized Telescope Mount as A Computer-Controlled Rotational Platform for Dummy Head Measurements,” in Audio Engineering Society 136th Convention, 2014.

[33] A. Churnside et al., “Object-based broadcasting - curation, responsiveness and user experience,” in International Broadcasting Convention (IBC) 2014 Conference, 2014, p. 12.2-12.2.

[34] C. Pike, F. Melchior, and T. Tew, “Assessing the Plausibility of Non-Individualised Dynamic Binaural Synthesis in a Small Room,” in 55th International Conference of the Audio Engineering Society on Spatial Audio, 2014, vol. 2014-January, pp. 1–8.

[35] C. Pike and F. Melchior, “An assessment of virtual surround sound systems for headphone listening of 5.1 multichannel audio,” in Audio Engineering Society 134th Convention, 2013, pp. 2–7.

[36] F. Melchior, C. Pike, B. Matthew, and S. Grace, “On the Use of a Haptic Feedback Device for Sound Source Control in Spatial Audio Systems,” in Audio Engineering Society 134th Convention, 2013.

[37] M. Mann, A. Churnside, A. Bonney, and F. Melchior, “Object-based audio applied to football broadcasts,” in Proceedings of the 2013 ACM international workshop on Immersive media experiences - ImmersiveMe ’13, 2013, pp. 13–16.

[38] F. Melchior, “The Reactive Source: A Reproduction Format Agnostic and Adaptive Spatial Audio Effect,” in Audio Engineering Society 133rd Convention, 2012.

[39] F. Melchior, S. Mauer, and M. Dausel, “Design and integration of a 3D WFS System in a cinema environment including ceiling speakers - a case study,” in 38. Deutsche Jahrestagung für Akustik - DAGA 2012, 2012.

[40] F. Melchior, “The reactive source: A reproduction format agnostic and adaptive spatial audio effect,” in 15th Int. Conference on Digital Audio Effects (DAFx-12), 2012, pp. 15–18.

[41] F. Melchior, C. Sladeczek, A. Partzsch, and S. Brix, “Design and implementation of an interactive room simulation for wave field synthesis,” in AES International Conference, 2011.

[42] F. Melchior, L. Altmann, A. Neidhardt, S. Mauer, and J.-R. Menzinger, “A Novel Framework for Simulation and Auralization of Massive Multichannel Systems,” in 37. Deutsche Jahrestagung für Akustik - DAGA 2011, 2011.

[43] F. Melchior, U. Heusinger, and J. Liebetrau, “Perceptual evaluation of a spatial audio algorithm based on wave field synthesis using a reduced number of loudspeakers,” in Audio Engineering Society 131st Convention, 2011.

[44] S. Mauer and F. Melchior, “Design and Realization of a Reference Loudspeaker Panel for Wave Field Synthesis,” in Audio Engineering Society 130th Convention, 2011.

[45] F. Melchior, F. Gries, U. Heusinger, and J. Liebetrau, “Untersuchungen zur Wahrnehmung früher Deckenreflexionen,” in 36. Deutsche Jahrestagung für Akustik - DAGA 2010, 2010.

[46] F. Melchior, C. Sladeczek, A. Partzsch, and S. Brix, “Design and Implementation of an Interactive Room Simulation for Wave Field Synthesis,” in Audio Engineering Society Conference: 40th International Conference: Spatial Audio: Sense the Sound of Space, 2010.

[47] F. Melchior, K. Zheng, and D. de Vries, “Spherical Array Systems - On the Effect of Measurement Errors in Terms of Perceived Auralization Quality,” in NAG/DAGA 2009 International Conference on Acoustics, Rotterdam, 2009.

[48] F. Melchior, O. Thiergart, G. Del Galdo, D. de Vries, and S. Brix, “Dual radius spherical cardioid microphone arrays for binaural auralization,” in Audio Engineering Society 127th Convention, 2009.

[49] F. Melchior and G. Gatzsche, “Spatial Audio Authoring and Rendering: Forward Research Through Exchange,” in ICMC, Belfast, 2008, pp. 2–3.

[50] F. Melchior, C. Sladeczek, D. De Vries, and B. Fröhlich, “User-dependent optimization of wave field synthesis reproduction for directive sound fields,” in Audio Engineering Society 124th Convention, 2008.

[51] F. Melchior and F. Walter, “On the Measurement of Electro Acoustic Enhanced Sound Fields,” in Audio Engineering Society 124th Convention, 2008.

[52] F. Melchior and D. de Vries, “On the visualization and modification of room impulse responses for sound design,” in 34. Deutsche Jahrestagung für Akustik - DAGA 2008, 2008.

[53] F. Melchior and D. de Vries, “On the perception of reflections from directive sources in binaural simulations,” in 34. Deutsche Jahrestagung für Akustik - DAGA 2008, 2008.

[54] J. P. Springer, C. Sladeczek, F. Melchior, M. Scheffler, B. Fröhlich, and J. Hochstrate, “Combining wave field synthesis and multi-viewer stereo displays,” in IEEE Virtual Reality, 2006, vol. 2006, p. 32.

[55] F. Melchior, G. Gatzsche, M. Strauss, K. Reichelt, M. Dausel, and J. Deguara, “Universal System for Spatial Sound Reinforcement in Theatres and Large Venues - System Design and User Interface,” Audio Engineering Society 120th Convention, 2006.

[56] D. Oschlies, B. Albrecht, F. Melchior, and D. de Vries, “Simulationsumgebung für Mikrofonarrays in der Musikaufnahme,” in 32. Deutsche Jahrestagung für Akustik DAGA 2006, 2006.

[57] D. de Vries, J.-O. Fischer, and F. Melchior, “Audiovisual Perception using Wave Field Synthesis in Combination with Augmented Reality Systems: Horizontal Positioning,” in Audio Engineering Society Conference: 28th International Conference: The Future of Audio Technology – Surround and Beyond, 2006.

[58] F. Melchior, J. Langhammer, and D. De Vries, “A new approach for direct interaction with graphical representations of room impulse responses for the use in wave field synthesis reproduction,” in Audio Engineering Society 120th Convention, 2006, vol. 4.

[59] F. Melchior, T. Laubach, and D. De Vries, “Authoring and user interaction for the production of wave field synthesis content in an augmented reality system,” in Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2005, 2005, vol. 2005, pp. 48–51.

[60] B. Albrecht et al., “An Approach for Multichannel Recording and Reproduction of Sound Source Directivity,” in Audio Engineering Society 119th Convention, 2005.

[61] R. Jacques, B. Albrecht, H.-P. Schade, D. de Vries, and F. Melchior, “Multichannel Source Directivity Recording in Anechoic Chamber and in a Studio,” in Forum Acusticum 2005, vol. 1952, pp. 479–484, 2005.

[62] F. Melchior, D. de Vries, and S. Brix, “Zur Kombination von Wellenfeldsynthese mit monoskopischer und stereoskopischer Bildwiedergabe,” in 31. Deutsche Jahrestagung für Akustik - DAGA 2005, 2005.

[63] M. Strauss, A. Wagner, A. Walther, and F. Melchior, “Generation of Highly Immersive Atmospheres for Wave Field Synthesis Reproduction,” in Audio Engineering Society 116th Convention, 2004.

[64] S. Brix, F. Melchior, T. Röder, S. Wabnik, and C. Riegel, “Authoring Systems for Wave Field Synthesis Content Production,” in Audio Engineering Society 115th Convention, 2003.

[65] F. Melchior, S. Brix, T. Sporer, T. Röder, and B. Klehs, “Wave Field Synthesis in Combination with 2D Video Projection,” in Audio Engineering Society Conference: 24th International Conference: Multichannel Audio, The New Reality, 2003.


 

Projects

ORPHEUS | Object-based broadcasting for European leadership in next generation audio experiences

Dec. 2015 – May 2018 | (Link to ORPHEUS)

Object-based media is a revolutionary approach for creating and deploying interactive, personalised, scalable and immersive content, by representing it as a set of individual assets together with metadata describing their relationships and associations. This allows media objects to be assembled in groundbreaking ways to create new user experiences.
The consortium partners will lay the foundation for facilitating infinite combinations of audio objects in ways that are flexible and responsive to user, environmental and platform-specific factors. This includes innovative tools for capturing, mixing, monitoring, storing, archiving, playing out, distributing and rendering object-based audio.  ORPHEUS will deliver a sustainable solution, ensuring that workflows and components for object-based audio scale up to enable cost-effective commercial production, storage, re-purposing, play-out and distribution.
ORPHEUS will demonstrate the new user experience through the realisation of close-to-market workflows, proving the economic viability of object-based audio as an emerging media and broadcast technology. ORPHEUS will publish a reference architecture and guidelines on how to implement object-based audio chains.
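The object-plus-metadata idea above can be illustrated with a minimal sketch (the object fields and the panning law here are illustrative assumptions, not part of any ORPHEUS component): each object is an audio asset with rendering metadata, and the renderer assembles the final mix from that metadata at playback time.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo gains derived from an object's azimuth metadata.

    Azimuth is clamped to [-30, 30] degrees (hard left to hard right).
    """
    a = max(-30.0, min(30.0, azimuth_deg))
    theta = (a + 30.0) / 60.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

def render_stereo(objects, num_samples):
    """Mix mono object signals into a stereo output using per-object metadata."""
    left = [0.0] * num_samples
    right = [0.0] * num_samples
    for obj in objects:
        gl, gr = pan_gains(obj["azimuth"])
        g = obj.get("gain", 1.0)
        for i, x in enumerate(obj["signal"][:num_samples]):
            left[i] += g * gl * x
            right[i] += g * gr * x
    return left, right

# A scene is just assets plus metadata; editing the metadata (not the audio)
# changes how the scene is assembled for a given listener or device.
scene = [
    {"name": "dialogue", "azimuth": 0.0, "gain": 1.0, "signal": [0.5, 0.5]},
    {"name": "crowd", "azimuth": -30.0, "gain": 0.5, "signal": [0.2, 0.2]},
]
left, right = render_stereo(scene, 2)
```

Personalisation then becomes a metadata operation, e.g. raising the dialogue object's gain for accessibility without touching the other assets.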


Project partners: Fraunhofer IIS, Eurescom, BBC R&D, Institut für Rundfunktechnik, Elephantcandy, Trinnov Audio, B-Com, IRCAM, Bayerischer Rundfunk, Magix


ICoSOLE | Immersive Coverage of Spatially Outspread Live Events

Oct. 2013 – Sep. 2016 | (Link to ICoSOLE)

Live events such as music festivals or triathlons extend over large areas. Due to economic limitations, TV coverage of such events can only focus on preselected parts, which often turn out not to be the most interesting ones. At the same time, several new video camera types and personal devices with good-quality cameras and audio recording capabilities are available. ICoSOLE addresses the need to cover events better by combining traditional broadcast technology with new video and audio recording devices as well as personal devices. The project will investigate how content from different devices can be combined, edited, and delivered to viewers by means of traditional broadcast technology but also with web-based technology. This will give attendees of the event, but to an even greater extent remote viewers, more choice when watching the event, as well as the opportunity to follow the most interesting parts in a flexible way.
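As a toy illustration of combining contributions from heterogeneous devices (the clip fields and the selection rule are assumptions for this sketch, not ICoSOLE components), one can imagine each contribution carrying a capture interval and a quality score from ingest analysis, from which a single timeline is assembled:

```python
# Contributions from broadcast cameras and personal devices, each tagged with
# a capture interval and an (assumed) quality score assigned at ingest.
clips = [
    {"source": "broadcast_cam_1", "start": 0.0, "end": 10.0, "quality": 0.9},
    {"source": "phone_a", "start": 0.0, "end": 10.0, "quality": 0.4},
    {"source": "phone_b", "start": 10.0, "end": 20.0, "quality": 0.6},
]

def build_timeline(clips):
    """Pick the best-quality clip for each covered interval, in time order."""
    best = {}
    for clip in clips:
        key = (clip["start"], clip["end"])
        if key not in best or clip["quality"] > best[key]["quality"]:
            best[key] = clip
    return [best[key] for key in sorted(best)]

edit = build_timeline(clips)
```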


Project partners: Joanneum Research, Technicolor, VRT Research & Innovation, iMinds, Bitmovin, BBC R&D, Tools at Work


FAST | Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption

June 2014 – June 2019 | (Link to FAST)

This five-year EPSRC project brings the very latest technologies to bear on the entire recorded music industry, end-to-end, producer to consumer, making the production process more fruitful, the consumption process more engaging, and the delivery and intermediation more automated and robust. It addresses three main premises: (i) that Semantic Web technologies should be deployed throughout the content value chain from producer to consumer; (ii) that advanced signal processing should be employed in the content production phases to extract “pure” features of perceptual significance and represent these in standard vocabularies; (iii) that this combination of semantic technologies and content-derived metadata leads to advantages (and new products and services) at many points in the value chain, from recording studio to end-user (listener) devices and applications.
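Premises (ii) and (iii) — extract perceptually meaningful features and publish them as vocabulary-based metadata — can be sketched roughly as follows (the feature choice, the term name, and the triple layout are illustrative assumptions, not an actual FAST vocabulary):

```python
import math

def rms_level(signal):
    """A simple content-derived feature: root-mean-square level of a mono signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def describe(track_id, signal):
    """Express extracted features as subject-predicate-object statements,
    with a made-up term ("ex:feature/rmsLevel") standing in for a real ontology."""
    return [(track_id, "ex:feature/rmsLevel", rms_level(signal))]

triples = describe("track:42", [0.5, -0.5, 0.5, -0.5])
```

Downstream services would then query such statements rather than re-analyse the audio.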


Project partners: Queen Mary University, University of Nottingham, Oxford e-Research Centre, Abbey Road Red, Internet Archive, BBC R&D, Audio Labs Erlangen, Solid State Logic


S3A | Future Spatial Audio for an Immersive Listener Experience at Home

Dec. 2013 – June 2019 | (Link to S3A)

S3A is a five-year UK research collaboration between internationally leading experts in 3D audio and visual processing. The partnership aims to unlock the creative potential of 3D sound to provide immersive experiences to the general public at home or on the move. S3A will pioneer a radical new listener-centred approach to 3D sound production that can dynamically adapt to the listeners’ environment and location to create a sense of immersion. Current 3D sound systems rely upon fixed loudspeaker arrangements and acoustically treated rooms that are not practical for home use. S3A will change the way audio is produced and delivered to enable practical high-quality 3D sound reproduction based on listener perception.
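The listener-centred adaptation idea can be sketched as a simple decision over listener-environment metadata (the thresholds and renderer names are illustrative assumptions, not the S3A renderer):

```python
def choose_renderer(environment):
    """Adapt reproduction to whatever the listener actually has available."""
    if environment.get("headphones"):
        return "binaural"       # binaural rendering over headphones
    n = environment.get("loudspeakers", 0)
    if n >= 4:
        return "panning_3d"     # amplitude panning over the measured layout
    if n == 2:
        return "stereo"
    return "mono"

# The same programme renders differently in each home: a two-speaker living
# room gets a stereo rendering without any change to the produced content.
method = choose_renderer({"loudspeakers": 2, "headphones": False})
```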


Project partners: Salford University, University of Surrey, University of Southampton, BBC R&D

 

 

Theses

A selection of the theses I have supervised:

Master's Theses 2021

J. Kieser, Entwicklung und Evaluation eines Prototyps zur Integration von 3D-Audio in einem virtuellen Wavetable Synthesizer (award winner in the AES Student Design Competition)

M. Remy, Mikrofonaufnahmetechnik für Atmosphären als Sounddesign-Komponente immersiver Audioproduktionen PDF

M. Ehrhard, Entwurf und prototypische Implementierung eines Systems zur binauralen Simulation von Regieräumen

Master's Theses 2020

L. Hofmann, Konzeption und Erprobung eines Stützmikrofonverfahrens zur ausgedehnten Abbildung akustischer Instrumente in mehrdimensionalen Audiomischungen PDF

B. Kilper, Chancen und Herausforderungen der binauralen Audiotechnik für auditive Medien PDF

D. Rieger, Objektbasierte Musikproduktion, in cooperation with Fraunhofer IIS Erlangen (awarded the ARD/ZDF Förderpreis Frauen und Medientechnologie)

Bachelor's Theses 2021

A. Morgner, Systemdesign für mobiles Livestreaming mit IP-basiertem Workflow und binauralem Audio am Fallbeispiel Stayin' Live Stuttgart PDF

Bachelor's Theses 2020

C. Feeß, Interaktive Granularsynthese von Audiosignalen durch Simulation von partikelbasierten Schwarmverhalten