They are based on objects

Your code interacts with APIs using one or more JavaScript objects, which serve as containers for the data the API uses (contained in object properties), and the functionality the API makes available (contained in object methods).
Note: If you are not already familiar with how objects work, you should go back and work through our JavaScript objects module before continuing.
Let's return to the example of the Web Audio API — this is a fairly complex API, which consists of a number of objects. The most obvious ones are:
- AudioContext, which represents an audio graph that can be used to manipulate audio playing inside the browser, and has a number of methods and properties available to manipulate that audio.
- MediaElementAudioSourceNode, which represents an `<audio>` element containing sound you want to play and manipulate inside the audio context.
- AudioDestinationNode, which represents the destination of the audio, i.e. the device on your computer that will actually output it (usually your speakers or headphones).
So how do these objects interact? If you look at our simple web audio example (see it live also), you'll first see the following HTML:
```html
<!-- Markup reconstructed to match the JavaScript below; the audio file
     name is a placeholder. -->
<audio src="music.mp3"></audio>

<button class="paused">Play</button>
<br />

<input type="range" min="0" max="1" step="0.01" value="1" class="volume" />
```
We, first of all, include an `<audio>` element with which we embed an MP3 into the page. We don't include any default browser controls. Next, we include a `<button>` that we'll use to play and stop the music, and an `<input>` element of type range, which we'll use to adjust the volume of the track while it's playing.
Next, let's look at the JavaScript for this example.
We start by creating an AudioContext instance inside which to manipulate our track:
```js
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();
```
Next, we create constants that store references to our `<audio>`, `<button>`, and `<input>` elements, and use the AudioContext.createMediaElementSource() method to create a MediaElementAudioSourceNode representing the source of our audio, i.e. the `<audio>` element the track will be played from:
```js
const audioElement = document.querySelector("audio");
const playBtn = document.querySelector("button");
const volumeSlider = document.querySelector(".volume");

const audioSource = audioCtx.createMediaElementSource(audioElement);
```
Next up we include a couple of event handlers that serve to toggle between play and pause when the button is pressed and reset the display back to the beginning when the song has finished playing:
```js
// play/pause audio
playBtn.addEventListener("click", () => {
  // check if context is in suspended state (autoplay policy)
  if (audioCtx.state === "suspended") {
    audioCtx.resume();
  }

  // if track is stopped, play it
  if (playBtn.getAttribute("class") === "paused") {
    audioElement.play();
    playBtn.setAttribute("class", "playing");
    playBtn.textContent = "Pause";
    // if track is playing, stop it
  } else if (playBtn.getAttribute("class") === "playing") {
    audioElement.pause();
    playBtn.setAttribute("class", "paused");
    playBtn.textContent = "Play";
  }
});

// if track ends
audioElement.addEventListener("ended", () => {
  playBtn.setAttribute("class", "paused");
  playBtn.textContent = "Play";
});
```
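The branch on the button's class attribute is a small two-state machine. As a sketch, the transition can be pulled out into a pure function (a hypothetical refactoring, not part of the example's actual code), which makes the toggle logic easy to test in isolation:

```js
// Hypothetical helper mirroring the click handler's toggle logic:
// given the button's current class, return the next class and button label.
function nextPlayerState(currentClass) {
  if (currentClass === "paused") {
    // track is stopped: switch to playing
    return { className: "playing", label: "Pause" };
  }
  // track is playing: switch back to paused
  return { className: "paused", label: "Play" };
}
```

The handler would then call audioElement.play() or audioElement.pause() depending on the returned state, and write the class and label back to the button.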
Note: Some of you may notice that the play() and pause() methods being used to play and pause the track are not part of the Web Audio API; they are part of the HTMLMediaElement API, which is different but closely related.
Next, we create a GainNode object using the AudioContext.createGain() method, which can be used to adjust the volume of audio fed through it, and create another event handler that changes the value of the audio graph's gain (volume) whenever the slider value is changed:
```js
// volume
const gainNode = audioCtx.createGain();

volumeSlider.addEventListener("input", () => {
  gainNode.gain.value = volumeSlider.value;
});
```
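Note that volumeSlider.value is a string, and that gain values above 1 amplify the signal rather than just attenuating it. If your markup doesn't already constrain the range input to 0–1, you might normalize the value before assigning it; the helper below is a hypothetical addition, not part of the example:

```js
// Hypothetical helper: convert a slider's string value to a number and
// clamp it into [0, 1] before assigning it to gainNode.gain.value.
function toGainValue(sliderValue) {
  const n = Number(sliderValue);
  if (Number.isNaN(n)) {
    return 1; // fall back to full (unamplified) volume
  }
  return Math.min(1, Math.max(0, n));
}
```

The event handler would then do `gainNode.gain.value = toGainValue(volumeSlider.value);`.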
The final thing to do to get this to work is to connect the different nodes in the audio graph up, which is done using the AudioNode.connect() method available on every node type:
```js
audioSource.connect(gainNode).connect(audioCtx.destination);
```
The audio starts in the source, which is then connected to the gain node so the audio's volume can be adjusted. The gain node is then connected to the destination node so the sound can be played on your computer (the AudioContext.destination property represents whatever is the default AudioDestinationNode available on your computer's hardware, e.g. your speakers).
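The chained call above works because, when connecting to another AudioNode, AudioNode.connect() returns the node you connected to. A tiny mock using plain objects (no Web Audio involved) illustrates why the pattern composes:

```js
// Minimal mock of connect() chaining: each node records its outgoing link
// and returns the target node, so calls chain left to right.
function makeNode(name) {
  return {
    name,
    next: null,
    connect(target) {
      this.next = target;
      return target; // returning the target is what enables chaining
    },
  };
}

const source = makeNode("source");
const gain = makeNode("gain");
const destination = makeNode("destination");

// source -> gain -> destination, as in the real audio graph
source.connect(gain).connect(destination);
```

In the real API, the equivalent chain is exactly the one shown above: `audioSource.connect(gainNode).connect(audioCtx.destination)`.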