zh-CN: Synchronize the translation of createAnalyser #17067

Merged: 14 commits, Nov 23, 2023
90 changes: 47 additions & 43 deletions files/zh-cn/web/api/baseaudiocontext/createanalyser/index.md
---
title: "BaseAudioContext: createAnalyser() method"
short-title: createAnalyser()
slug: Web/API/BaseAudioContext/createAnalyser
---

{{APIRef("Web Audio API")}}

The `createAnalyser()` method of the {{domxref("BaseAudioContext")}} interface creates an {{domxref("AnalyserNode")}}, which can be used to expose audio time and frequency data and to create data visualizations.

> **Note:** The {{domxref("AnalyserNode.AnalyserNode", "AnalyserNode()")}} constructor is the recommended way to create an {{domxref("AnalyserNode")}}; see [Creating an AudioNode](/zh-CN/docs/Web/API/AudioNode#创建_audionode).

> **Note:** For more on using this node, see the {{domxref("AnalyserNode")}} page.
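As the note above says, the constructor and this factory method produce equivalent nodes. A minimal sketch of the two forms (browser-only; it assumes a Web Audio environment where `AudioContext` and `AnalyserNode` exist, and the helper name `makeAnalysers` is ours, not part of the API):

```javascript
// Sketch: the two ways of obtaining an AnalyserNode from a context.
// Browser-only; AudioContext/AnalyserNode are assumed to exist.
function makeAnalysers(audioCtx) {
  // Factory method documented on this page:
  const viaFactory = audioCtx.createAnalyser();

  // Recommended constructor form, which also takes options up front:
  const viaConstructor = new AnalyserNode(audioCtx, { fftSize: 2048 });

  return { viaFactory, viaConstructor };
}

// Usage (in a browser):
// const { viaFactory, viaConstructor } = makeAnalysers(new AudioContext());
```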

## Syntax

```js-nolint
createAnalyser()
```

### Parameters

None.

### Return value

An {{domxref("AnalyserNode")}} object.

## Examples

The following example shows basic usage of an AudioContext to create an analyser node, then using requestAnimationFrame() to repeatedly collect time-domain data and draw an "oscilloscope style" output of the current audio input. For more complete applied examples and information, check out the [Voice-change-O-matic](https://mdn.github.io/voice-change-o-matic/) demo (see [app.js lines 108–193](https://github.com/mdn/webaudio-examples/tree/main/voice-change-o-matic/scripts/app.js#L108-L193) for the relevant code).

```js
// canvas, canvasCtx, WIDTH, HEIGHT and drawVisual are assumed to be set up
// elsewhere (see the full demo linked above).
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const analyser = audioCtx.createAnalyser();

// …

analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyser.getByteTimeDomainData(dataArray);

// draw an oscilloscope of the current audio source
function draw() {
  drawVisual = requestAnimationFrame(draw);

  analyser.getByteTimeDomainData(dataArray);

  canvasCtx.fillStyle = "rgb(200, 200, 200)";
  canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

  canvasCtx.lineWidth = 2;
  canvasCtx.strokeStyle = "rgb(0, 0, 0)";

  canvasCtx.beginPath();

  const sliceWidth = (WIDTH * 1.0) / bufferLength;
  let x = 0;

  for (let i = 0; i < bufferLength; i++) {
    const v = dataArray[i] / 128.0;
    const y = (v * HEIGHT) / 2;

    if (i === 0) {
      canvasCtx.moveTo(x, y);
    } else {
      canvasCtx.lineTo(x, y);
    }

    x += sliceWidth;
  }

  canvasCtx.lineTo(canvas.width, canvas.height / 2);
  canvasCtx.stroke();
}

draw();
```
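Two numeric details in the example above can be sketched in isolation: `frequencyBinCount` is defined as half of `fftSize`, and `getByteTimeDomainData()` fills the array with unsigned bytes centered on 128 (silence), which the drawing loop maps onto the canvas height. The helpers below are hypothetical illustrations of that arithmetic, not Web Audio API functions:

```javascript
// Hypothetical helpers illustrating the arithmetic used above; they are
// not part of the Web Audio API.

// frequencyBinCount is always fftSize / 2 (fftSize must be a power of two
// between 32 and 32768).
function frequencyBinCount(fftSize) {
  if (fftSize < 32 || fftSize > 32768 || (fftSize & (fftSize - 1)) !== 0) {
    throw new RangeError("fftSize must be a power of two in [32, 32768]");
  }
  return fftSize / 2;
}

// A byte time-domain sample of 128 represents the midline (silence), so it
// should land at half the canvas height.
function sampleToY(byteSample, height) {
  const v = byteSample / 128.0; // roughly 0..2, with 1.0 at the midline
  return (v * height) / 2;
}

console.log(frequencyBinCount(2048)); // 1024
console.log(sampleToY(128, 400)); // 200
```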

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}

## See also

- [Using the Web Audio API](/zh-CN/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)