Web Audio API: Proper way to play data chunks from a nodejs server via socket
I'm using the following code to decode audio chunks coming from a nodejs socket:
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if ((init != 0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function(err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});

function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // make the next buffer wait the length of the last buffer before being played
    }
}
But there are gaps/glitches between the audio chunks that I haven't been able to figure out.

I've also read that with a MediaSource it's possible to do the same thing and let the player handle the timing instead of doing it manually. Can someone provide an example that handles mp3 data?

Moreover, what is the proper way to handle live streaming with the Web Audio API? I've read almost all the questions on this subject, and none of them seems to work without issues. Any ideas?
You can take this code as an example: https://github.com/kmoskwiak/node-tcp-streaming-server
It basically uses Media Source Extensions. All you need to do is change from video to audio:
buffer = mediaSource.addSourceBuffer('audio/mpeg');
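For context, here is a minimal sketch of what the server side could look like when pushing mp3 chunks to the browser over a WebSocket. It is not taken from that repository; the ws module, file name, port, and chunk size are assumptions for illustration only.

// server.js - push an mp3 file to each connected browser client in small chunks
const WebSocket = require('ws');
const fs = require('fs');

const wss = new WebSocket.Server({ port: 8080 }); // assumed port

wss.on('connection', function (ws) {
    // highWaterMark controls how large each streamed chunk is
    const stream = fs.createReadStream('music.mp3', { highWaterMark: 16 * 1024 });
    stream.on('data', function (chunk) {
        ws.send(chunk); // each chunk arrives client-side as one binary message
    });
    stream.on('end', function () {
        ws.close();
    });
    ws.on('close', function () {
        stream.destroy();
    });
});

Each received chunk can then be appended to the SourceBuffer, as in the snippet below.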
Yes, @Keyne is right:
const mediaSource = new MediaSource()
const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg')
player.src = URL.createObjectURL(mediaSource)
sourceBuffer.appendBuffer(chunk) // Repeat this for each chunk as ArrayBuffer
player.play()
But only do this if you don't care about iOS support (https://developer.mozilla.org/en-US/docs/Web/API/MediaSource#Browser_compatibility).
Otherwise, please let me know how you did it!
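One caveat about the snippet above: addSourceBuffer can only be called once the MediaSource has fired its sourceopen event, and appendBuffer throws if a previous append is still updating. Here is a minimal sketch that handles both, assuming an audio element on the page and the WebSocket endpoint from the server sketch above (all names are illustrative, not part of this answer):

// Assumes an <audio> element on the page and an assumed ws://localhost:8080 endpoint
const player = document.querySelector('audio');
const mediaSource = new MediaSource();
player.src = URL.createObjectURL(mediaSource);

let sourceBuffer = null;
const queue = []; // chunks that arrive before the SourceBuffer is ready

mediaSource.addEventListener('sourceopen', function () {
    // addSourceBuffer is only valid once the MediaSource is open
    sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
    sourceBuffer.addEventListener('updateend', appendNext);
    appendNext();
});

function appendNext() {
    // appendBuffer throws if a previous append is still in progress
    if (sourceBuffer && !sourceBuffer.updating && queue.length) {
        sourceBuffer.appendBuffer(queue.shift());
    }
}

const ws = new WebSocket('ws://localhost:8080'); // assumed endpoint
ws.binaryType = 'arraybuffer';
ws.onmessage = function (event) {
    queue.push(event.data); // each message is one ArrayBuffer chunk of mp3
    appendNext();
    if (player.paused) {
        player.play();
    }
};

Serializing appends on updateend keeps the SourceBuffer in a valid state no matter how quickly chunks arrive over the socket.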