From 4c2ab7343da4ccb3d48649c60341b77d554be5ff Mon Sep 17 00:00:00 2001
From: Alexey Pyltsyn
Date: Wed, 24 Apr 2019 00:41:59 +0300
Subject: [PATCH] Replace `Github` with `GitHub`

---
 .../2-async-iterators-generators/article.md | 8 ++++----
 5-network/01-fetch-basics/article.md        | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/1-js/12-generators-iterators/2-async-iterators-generators/article.md b/1-js/12-generators-iterators/2-async-iterators-generators/article.md
index 095bc69a..45cf7393 100644
--- a/1-js/12-generators-iterators/2-async-iterators-generators/article.md
+++ b/1-js/12-generators-iterators/2-async-iterators-generators/article.md
@@ -264,7 +264,7 @@ So far we've seen simple examples, to gain basic understanding. Now let's review
 
 There are many online APIs that deliver paginated data. For instance, when we need a list of users, then we can fetch it page-by-page: a request returns a pre-defined count (e.g. 100 users), and provides an URL to the next page.
 
-The pattern is very common, it's not about users, but just about anything. For instance, Github allows to retrieve commits in the same, paginated fashion:
+The pattern is very common, it's not about users, but just about anything. For instance, GitHub allows to retrieve commits in the same, paginated fashion:
 
 - We should make a request to URL in the form `https://api.github.com/repos/<repo>/commits`.
 - It responds with a JSON of 30 commits, and also provides a link to the next page in the `Link` header.
@@ -273,7 +273,7 @@ The pattern is very common, it's not about users, but just about anything. For i
 
 What we'd like to have is an iterable source of commits, so that we could use it like this:
 
 ```js
-let repo = 'javascript-tutorial/en.javascript.info'; // Github repository to get commits from
+let repo = 'javascript-tutorial/en.javascript.info'; // GitHub repository to get commits from
 
 for await (let commit of fetchCommits(repo)) {
   // process commit
@@ -308,9 +308,9 @@ async function* fetchCommits(repo) {
 }
 ```
 
-1. We use the browser `fetch` method to download from a remote URL. It allows to supply authorization and other headers if needed, here Github requires `User-Agent`.
+1. We use the browser `fetch` method to download from a remote URL. It allows to supply authorization and other headers if needed, here GitHub requires `User-Agent`.
 2. The fetch result is parsed as JSON, that's again a `fetch`-specific method.
-3. We can get the next page URL from the `Link` header of the response. It has a special format, so we use a regexp for that. The next page URL may look like this: `https://api.github.com/repositories/93253246/commits?page=2`, it's generatd by Github itself.
+3. We can get the next page URL from the `Link` header of the response. It has a special format, so we use a regexp for that. The next page URL may look like this: `https://api.github.com/repositories/93253246/commits?page=2`, it's generatd by GitHub itself.
 4. Then we yield all commits received, and when they finish -- the next `while(url)` iteration will trigger, making one more request.
 
 An example of use (shows commit authors in console):

diff --git a/5-network/01-fetch-basics/article.md b/5-network/01-fetch-basics/article.md
index 7d56ab2c..783c3996 100644
--- a/5-network/01-fetch-basics/article.md
+++ b/5-network/01-fetch-basics/article.md
@@ -54,7 +54,7 @@ To get the response body, we need to use an additional method call.
 - **`response.arrayBuffer()`** -- return the response as [ArrayBuffer](info:arraybuffer-binary-arrays) (pure binary data),
 - additionally, `response.body` is a [ReadableStream](https://streams.spec.whatwg.org/#rs-class) object, it allows to read the body chunk-by-chunk, we'll see an example later.
 
-For instance, here we get a JSON-object with latest commits from Github:
+For instance, here we get a JSON-object with latest commits from GitHub:
 
 ```js run async
 let response = await fetch('https://api.github.com/repos/javascript-tutorial/en.javascript.info/commits');