Commit 68d1ac109e by Ilya Kantor, 2019-03-18 11:36:08 +03:00 (parent 973f97cc09)

30 changed files with 455 additions and 260 deletions


@@ -126,13 +126,62 @@ Naturally, `promise` shows up first, because `setTimeout` macrotask awaits in th
So if we have a promise chain that doesn't wait for anything, then things like `setTimeout` or event handlers can never get in the middle.
## Unhandled rejection
Remember "unhandled rejection" event from the chapter <info:promise-error-handling>?
Now, with the understanding of microtasks, we can formalize it.
**"Unhandled rejection" is when a promise error is not handled at the end of the microtask queue.**
For instance, consider this code:
```js run
let promise = Promise.reject(new Error("Promise Failed!"));
window.addEventListener('unhandledrejection', event => {
alert(event.reason); // Promise Failed!
});
```
We create a rejected `promise` and do not handle the error. So we have the "unhandled rejection" event (printed in browser console too).
We wouldn't have it if we added `.catch`, like this:
```js run
let promise = Promise.reject(new Error("Promise Failed!"));
*!*
promise.catch(err => alert('caught'));
*/!*
// no error, all quiet
window.addEventListener('unhandledrejection', event => alert(event.reason));
```
Now let's say we catch the error, but after an extremely small delay:
```js run
let promise = Promise.reject(new Error("Promise Failed!"));
*!*
setTimeout(() => promise.catch(err => alert('caught')), 0);
*/!*
// Error: Promise Failed!
window.addEventListener('unhandledrejection', event => alert(event.reason));
```
Now the unhandled rejection appears again. Why? Because `unhandledrejection` triggers when the microtask queue is complete: the engine examines promises and, if any of them is in the "rejected" state, the event is generated.

In the example above, the `.catch` added by `setTimeout` also triggers, of course it does, but later, after the `unhandledrejection` event has already occurred.
## Summary

- Promise handling is always asynchronous, as all promise actions pass through the internal "promise jobs" queue, also called the "microtask queue" (V8 term).

    **So, `.then/catch/finally` is called after the current code is finished.**

    If we need to guarantee that a piece of code is executed after `.then/catch/finally`, it's best to add it into a chained `.then` call, as in the sketch below.
- There's also a "macrotask queue" that keeps various events, network operation results, `setTimeout`-scheduled calls, and so on. These are also called "macrotasks" (v8 term). - There's also a "macrotask queue" that keeps various events, network operation results, `setTimeout`-scheduled calls, and so on. These are also called "macrotasks" (v8 term).



@@ -1,4 +1,4 @@
# ArrayBuffer, binary arrays

Binary data appears when we work with arbitrary files (uploading, downloading, creation). Or when we want to do image/audio processing.


@@ -1,4 +1,4 @@
# TextDecoder and TextEncoder
What if the binary data is actually a string? What if the binary data is actually a string?


@@ -102,5 +102,19 @@ Reading methods `read*` do not generate events, but rather return the result, as
That's only available inside a Web Worker though, because delays and hang-ups in Web Workers are less important: they do not affect the page.
```
## Summary
It's most often used to read from files, and `File` objects inherit from `Blob`.

In addition to `Blob` methods and properties, `File` objects also have `name` and `lastModified` properties, plus the internal ability to read from the filesystem. We usually get `File` objects from user input, like `<input type="file">` or drag'n'drop.
`FileReader` objects can read from a file or a blob, in one of three formats:
- String (`readAsText`).
- `ArrayBuffer` (`readAsArrayBuffer`).
- Data url, base-64 encoded (`readAsDataURL`).
In many cases though, we don't have to read the file contents.
We can create a blob url with `URL.createObjectURL(file)` and assign it to `<a>` or `<img>`. This way the file can be downloaded or shown as an image, as part of a canvas, etc.

And if we're going to send a `File` over a network, that's also easy: network APIs like `XMLHttpRequest` or `fetch` natively accept `File` objects.
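For instance, a minimal sketch of the blob url approach (the `show` handler name is arbitrary):

```html run
<input type="file" onchange="show(this.files[0])">

<script>
  function show(file) {
    // create a blob url for the chosen file and display it as an image
    let img = document.createElement('img');
    img.src = URL.createObjectURL(file);
    document.body.append(img);
  }
</script>
```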


@@ -16,39 +16,17 @@ let promise = fetch(url, [params])
The browser starts the request right away and returns a `promise`.

Accepting a response is usually a two-stage process.

**The `promise` resolves with an object of the built-in [Response](https://fetch.spec.whatwg.org/#response-class) class as soon as the server responds with headers.**

So we can access the headers, we know HTTP status, whether it is successful, but don't have the body yet.
The main response properties are:

- **`ok`** -- boolean, `true` if the HTTP status code is 200-299.
- **`status`** -- HTTP status code.
- **`headers`** -- HTTP headers, a Map-like object.

We can iterate over headers the same way as over a `Map`:

```js run async
@@ -67,25 +45,41 @@ if (response.ok) {
}
```
To get the response body, we need to use an additional method call.

`Response` allows to access the body in multiple formats, using the following promise-based methods:

- **`json()`** -- parse as JSON object,
- **`text()`** -- as text,
- **`formData()`** -- as formData (form/multipart encoding),
- **`blob()`** -- as Blob (for binary data),
- **`arrayBuffer()`** -- as ArrayBuffer (for binary data)
- additionally, `response.body` is a [ReadableStream](https://streams.spec.whatwg.org/#rs-class) object that allows reading the body chunk-by-chunk.

For instance, here we get the response as JSON:

```js run async
let response = await fetch('https://api.github.com/repos/iliakan/javascript-tutorial-en/commits');
*!*
let commits = await response.json();
*/!*
alert(commits[0].author.login);
```
Or, using pure promises:
```js run
fetch('https://api.github.com/repos/iliakan/javascript-tutorial-en/commits')
.then(response => response.json())
.then(commits => alert(commits[0].author.login));
```
To get text:
```js
let text = await response.text();
```
And for the binary example, let's fetch and show an image (see the chapter [Blob](info:blob) for details about operations on blobs):

```js async run
let response = await fetch('/article/fetch/logo-fetch.svg');
@@ -94,172 +88,22 @@ let response = await fetch('/article/fetch/logo-fetch.svg');
*!*
let blob = await response.blob(); // download as Blob object
*/!*

// create <img> for it
let img = document.createElement('img');
img.style = 'position:fixed;top:10px;left:10px;width:100px';
document.body.append(img);

// show it
img.src = URL.createObjectURL(blob);

setTimeout(() => { // hide after two seconds
  img.remove();
  URL.revokeObjectURL(img.src);
}, 2000);
```
```warn
Please note: we can use only one of these methods.

If we get `response.text()`, then `response.json()` won't work, as the body content has already been processed.
```

## Fetch API in detail

The second argument provides a lot of flexibility to `fetch` syntax.
Here's the full list of possible options with default values (alternatives commented out):
```js
let promise = fetch(url, {
  method: "GET", // POST, PUT, DELETE, etc.
  headers: {
    "Content-Type": "text/plain;charset=UTF-8"
  },
  body: undefined, // string, FormData, Blob, BufferSource, or URLSearchParams
  referrer: "about:client", // "" for no-referrer, or an url from the current origin
  referrerPolicy: "", // no-referrer, no-referrer-when-downgrade, same-origin...
  mode: "cors", // same-origin, no-cors, navigate, or websocket
  credentials: "same-origin", // omit, include
  cache: "default", // no-store, reload, no-cache, force-cache, or only-if-cached
  redirect: "follow", // manual, error
  integrity: "", // a hash, like "sha256-abcdef1234567890"
  keepalive: false, // true
  signal: undefined, // AbortController to abort request
  window: window // null
})
```

Not so long list actually, but quite a lot of capabilities.

Let's explore the options one-by-one with examples.

## method, headers, body

These are the most widely used fields:

- **`method`** -- HTTP-method, e.g. POST,
- **`headers`** -- an object with HTTP headers,
- **`body`** -- a string, or:
    - FormData object, to submit `form/multipart`,
    - Blob/BufferSource to send binary data,
    - URLSearchParams, to submit `x-www-form-urlencoded`.
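For instance, here's a minimal POST sketch using these fields (the url and payload are illustrative):

```js
let response = await fetch('/article/fetch/post/user', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json;charset=utf-8'
  },
  body: JSON.stringify({ name: 'John' }) // serialize the payload ourselves
});
```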


@@ -378,7 +378,7 @@ xhr.send(json);
The `.send(body)` method is pretty omnivorous. It can send almost everything, including Blob and BufferSource objects.
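A tiny sketch (the endpoint url is hypothetical):

```js
let blob = new Blob(["Hello, world!"], { type: "text/plain" });

let xhr = new XMLHttpRequest();
xhr.open("POST", "/upload"); // hypothetical endpoint
xhr.send(blob); // the Blob is sent as the request body
```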
## Upload progress

The `progress` event only works on the downloading stage.
@@ -398,7 +398,7 @@ Here's the list:
- `timeout` -- upload timed out (if `timeout` property is set).
- `loadend` -- upload finished with either success or error.

Example of handlers:

```js
xhr.upload.onprogress = function(event) {
@@ -414,6 +414,36 @@ xhr.upload.onerror = function() {
};
```
Here's a real-life example: file upload with progress indication:
```html run
<input type="file" onchange="upload(this.files[0])">

<script>
function upload(file) {
  let xhr = new XMLHttpRequest();

  // track upload progress
*!*
  xhr.upload.onprogress = function(event) {
    console.log(`Uploaded ${event.loaded} of ${event.total}`);
  };
*/!*

  // track completion: both successful or not
  xhr.onloadend = function() {
    if (xhr.status == 200) {
      console.log("success");
    } else {
      console.log("error " + this.status);
    }
  };

  xhr.open("POST", "/article/xmlhttprequest/post/upload");
  xhr.send(file);
}
</script>
```
## Summary


@@ -17,9 +17,9 @@ function accept(req, res) {
      chunks.push(data);
      length += data.length;

      // More than 100mb, kill the connection!
      if (length > 1e8) {
        req.connection.destroy();
      }
    });
@@ -32,6 +32,9 @@ function accept(req, res) {
      } else if (req.url == '/image') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ message: "Image saved", imageSize: length }));
      } else if (req.url == '/upload') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ message: "Upload complete", size: length }));
      } else {
        res.writeHead(404);
        res.end("Not found");


@@ -1,27 +1,16 @@

# Cross-Origin Fetch
TODO:

Note that the "Content-Length" header is not returned by default for CORS requests. Only these headers are exposed by default:

- Cache-Control
- Content-Language
- Content-Type
- Expires
- Last-Modified
- Pragma
If we make a `fetch` from an arbitrary web-site, that will probably fail.

The core concept here is *origin* -- a domain/port/protocol triplet.

Cross-origin requests -- those sent to another domain, protocol or port -- require special headers from the remote side.
For instance, let's try fetching from `http://example.com`:
```js run async
try {
  await fetch('http://example.com');
} catch(err) {
  alert(err); // Failed to fetch
}
@@ -29,27 +18,55 @@ try {
Fetch fails, as expected.

## Why?

Cross-origin requests are subject to a special safety control, with the sole purpose of protecting the internet from evil hackers.

Seriously. Let's make a very brief historical digression.

For many years JavaScript was unable to perform network requests.
The main way to send a request to another site was an HTML `<form>` with either `POST` or `GET` method. People submitted it to an `<iframe>`, just to stay on the current page.
```html
<form target="iframe" method="POST" action="http://site.com">
  <!-- javascript could dynamically generate and submit this kind of form -->
</form>

<iframe name="iframe"></iframe>
```
So, it *was* possible to make a request. But if the submission was to another site, then the main window was forbidden to access `<iframe>` content. Hence, it wasn't possible to get a response.
For many years cross-domain requests were simply unavailable. The internet got used to it, people got used to it.

Now imagine that `fetch` works from anywhere. How could an evil hacker use it?
They would create a page at `http://evil.com`, and when a user comes to it, run `fetch` from their mail server, e.g. `http://gmail.com/messages`.

Such a request usually sends authentication cookies, so `http://gmail.com` would recognize the user and send back the messages. Then the hacker could analyze the mail, move on to online banking, and so on.
![](cors-gmail-messages.png)

How do `gmail.com` and other sites protect users from such hacks now?

That's simple -- they add an additional secret value to all requests from their own pages, one that another site can't obtain: a script can't access the content of a page from another origin.

Because of cross-origin restrictions such a hack is impossible. They prevent an evil-minded person from doing anything that they couldn't do before.


@@ -1,41 +1,96 @@
# Fetch: Track download progress

To track download progress, we need to use `response.body`.

It's a "readable stream" - a special object that provides access chunk-by-chunk.

Here's the code to do this:

```js
const reader = response.body.getReader();

while(true) {
  // done is true for the last chunk
  // value is Uint8Array of bytes
  const chunk = await reader.read();

  if (chunk.done) {
    break;
  }

  console.log(`Received ${chunk.value.length} bytes`)
}
```

We do the infinite loop, while `await reader.read()` returns response chunks.

A chunk has two properties:
- **`done`** -- true when the reading is complete.
- **`value`** -- a typed array of bytes: `Uint8Array`.

The full code to get response and log the progress:

```js run async
// Step 1: start the request and obtain a reader
let response = await fetch('https://api.github.com/repos/iliakan/javascript-tutorial-en/commits?per_page=100');

const reader = response.body.getReader();

// Step 2: get total length
const contentLength = +response.headers.get('Content-Length');

// Step 3: read the data
let receivedLength = 0;
let chunks = [];
while(true) {
  const {done, value} = await reader.read();

  if (done) {
    break;
  }

  chunks.push(value);
  receivedLength += value.length;

  console.log(`Received ${receivedLength} of ${contentLength}`)
}

// Step 4: join chunks into result
let chunksAll = new Uint8Array(receivedLength); // (4.1)
let position = 0;
for(let chunk of chunks) {
  chunksAll.set(chunk, position); // (4.2)
  position += chunk.length;
}

// Step 5: decode into a string
let result = new TextDecoder("utf-8").decode(chunksAll);
let commits = JSON.parse(result);

// We're done!
alert(commits[0].author.login);
```
Let's explain that step-by-step:
1. We perform `fetch` as usual, but instead of calling `response.json()`, we obtain a stream reader `response.body.getReader()`.
Please note, we can't use both these methods to read the same response. Either use a reader or a response method to get the result.
2. Prior to reading, we can figure out the full response length by its `Content-Length` header.
It may be absent for cross-domain requests (as in the example) and, well, technically a server doesn't have to set it. But usually it's in place.
3. Now `await reader.read()` until it's done.
We gather the `chunks` in the array. That's important, because after the response is consumed, we won't be able to "re-read" it using `response.json()` or another way (you can try, there'll be an error).
4. At the end, we have `chunks` -- an array of `Uint8Array` byte chunks. We need to join them into a single result. Unfortunately, there's no single method that concatenates those.
1. We create `new Uint8Array(receivedLength)` -- a same-type array with the combined length.
2. Then use `.set(chunk, position)` method that copies each `chunk` at the given `position` (one by one) in the resulting array.
5. We have the result in `chunksAll`. It's a byte array though, not a string.
To create a string, we need to interpret these bytes. The built-in `TextDecoder` does exactly that. Then we can `JSON.parse` it.
What if we need binary content instead of JSON? That's even simpler. Instead of steps 4 and 5, we could make a blob of all chunks:
```js
let blob = new Blob(chunks);
```


@@ -0,0 +1,36 @@
<!doctype html>

<script>
(async () => {

  const response = await fetch('long.txt');

  const reader = response.body.getReader();

  const contentLength = +response.headers.get('Content-Length');

  let receivedLength = 0;
  let chunks = [];
  while(true) {
    const chunk = await reader.read();

    if (chunk.done) {
      console.log("done!");
      break;
    }

    chunks.push(chunk.value);
    receivedLength += chunk.value.length;
    console.log(`${receivedLength}/${contentLength} received`)
  }

  let chunksMerged = new Uint8Array(receivedLength);
  let length = 0;
  for(let chunk of chunks) {
    chunksMerged.set(chunk, length);
    length += chunk.length;
  }

  let result = new TextDecoder("utf-8").decode(chunksMerged);
  console.log(result);
})();
</script>


@@ -0,0 +1,55 @@
let http = require('http');
let url = require('url');
let querystring = require('querystring');

let static = require('node-static');
let file = new static.Server('.', {
  cache: 0
});

function accept(req, res) {

  if (req.method == 'POST') {
    let chunks = [];
    let length = 0;

    req.on('data', function (data) {
      chunks.push(data);
      length += data.length;

      // Too much POST data, kill the connection!
      if (length > 1e6) {
        req.connection.destroy();
      }
    });

    req.on('end', function() {
      // let post = JSON.parse(chunks.join(''));

      if (req.url == '/user') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ message: 'User saved' }));
      } else if (req.url == '/image') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ message: "Image saved", imageSize: length }));
      } else {
        res.writeHead(404);
        res.end("Not found");
      }
    });

  } else {
    file.serve(req, res);
  }
}

// ------ run the server -------

if (!module.parent) {
  http.createServer(accept).listen(8080);
} else {
  exports.accept = accept;
}


@@ -1,4 +1,96 @@

# URL objects

The built-in [URL](https://url.spec.whatwg.org/#api) class provides a convenient interface for creating and parsing URLs.
We don't have to use it at all. There are no networking methods that require exactly a `URL` object; strings are good enough. But sometimes it can be really helpful.

## Creating a URL
The syntax to create a new URL object:
```js
new URL(url, [base])
```
- **`url`** -- the text url
- **`base`** -- an optional base for the `url`
The `URL` object immediately allows us to access its components, so it's a nice way to parse the url, e.g.:
```js run
let url = new URL('https://javascript.info/url');
alert(url.protocol); // https:
alert(url.host); // javascript.info
alert(url.pathname); // /url
```
Here's the cheatsheet:
![](url-object.png)
- `href` is the full url, same as `url.toString()`
- `protocol` ends with the colon character `:`
- `search` starts with the question mark `?`
- `hash` starts with the hash character `#`
- there are also `username` and `password` properties if HTTP authentication is present.
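For instance, a quick check of these components (the url is made up for illustration):

```js run
let url = new URL('https://admin:pass@javascript.info/url?filter=new#top');

alert(url.search);   // ?filter=new
alert(url.hash);     // #top
alert(url.username); // admin
```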
We can also use `URL` to create relative urls, using the second argument:
```js run
let url = new URL('profile/admin', 'https://javascript.info');
alert(url); // https://javascript.info/profile/admin
url = new URL('tester', url); // go to 'tester' relative to current url path
alert(url); // https://javascript.info/profile/tester
```
```smart header="We can use `URL` everywhere instead of a string"
We can use an `URL` object in `fetch` or `XMLHttpRequest`, almost everywhere where a string url is expected.
In the vast majority of methods it's automatically converted to a string.
```
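For instance, a sketch that passes a `URL` object straight to `fetch` (the query parameter choice is arbitrary):

```js run async
let url = new URL('https://api.github.com/repos/iliakan/javascript-tutorial-en/commits');
url.searchParams.set('per_page', 3); // ask for 3 commits only

// fetch converts the URL object to a string automatically
let response = await fetch(url);
let commits = await response.json();

alert(commits.length); // 3
```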
## SearchParams
Let's say we want to create a url with given search params, for instance, `https://google.com/search?query=value`.

They must be correctly encoded.

In very old browsers, before `URL` appeared, we'd use built-in functions `encodeURIComponent/decodeURIComponent`.
Now, there's no need: `url.searchParams` is an object of type [URLSearchParams](https://url.spec.whatwg.org/#urlsearchparams).
It provides convenient methods for search parameters:
- **`append(name, value)`** -- add the parameter,
- **`delete(name)`** -- remove the parameter,
- **`get(name)`** -- get the parameter,
- **`getAll(name)`** -- get all parameters with that name,
- **`has(name)`** -- check for the existence of the parameter,
- **`set(name, value)`** -- set/replace the parameter,
- **`sort()`** -- sort parameters by name, rarely needed,
- ...and also iterable, similar to `Map`.
So, the `URL` object also provides an easy way to operate on url parameters.
For example:
```js run
let url = new URL('https://google.com/search');
url.searchParams.set('query', 'test me!');
alert(url); // https://google.com/search?query=test+me%21
url.searchParams.set('tbs', 'qdr:y'); // add param for date range: past year
alert(url); // https://google.com/search?query=test+me%21&tbs=qdr%3Ay
// iterate over search parameters (decoded)
for(let [name, value] of url.searchParams) {
alert(`${name}=${value}`); // query=test me!, then tbs=qdr:y
}
```
