components

This commit is contained in:
Ilya Kantor 2019-04-02 14:01:44 +03:00
parent 304d578b54
commit 6fb4aabcba
344 changed files with 669 additions and 406 deletions


@@ -136,7 +136,7 @@ Here's a brief list of methods to insert a node into a parent element (`parentEl
</script>
```
To insert `newLi` as the first element, we can do it like this:
```js
list.insertBefore(newLi, list.firstChild);
```
@@ -335,6 +335,74 @@ An example of copying the message:
</script>
```
## DocumentFragment [#document-fragment]
`DocumentFragment` is a special DOM node that serves as a wrapper to pass around groups of nodes.
We can append other nodes to it, but when we insert it somewhere, then it "disappears", leaving its content inserted instead.
For example, `getListContent` below generates a fragment with `<li>` items that are later inserted into `<ul>`:
```html run
<ul id="ul"></ul>
<script>
function getListContent() {
  let fragment = new DocumentFragment();

  for(let i=1; i<=3; i++) {
    let li = document.createElement('li');
    li.append(i);
    fragment.append(li);
  }

  return fragment;
}
*!*
ul.append(getListContent()); // (*)
*/!*
</script>
```
Please note that at the last line `(*)` we append the `DocumentFragment`, but it "blends in", so the resulting structure will be:
```html
<ul>
  <li>1</li>
  <li>2</li>
  <li>3</li>
</ul>
```
`DocumentFragment` is rarely used explicitly. Why append to a special kind of node, if we can return an array of nodes instead? Rewritten example:
```html run
<ul id="ul"></ul>
<script>
function getListContent() {
  let result = [];

  for(let i=1; i<=3; i++) {
    let li = document.createElement('li');
    li.append(i);
    result.push(li);
  }

  return result;
}
*!*
ul.append(...getListContent()); // append + "..." operator = friends!
*/!*
</script>
```
We mention `DocumentFragment` mainly because there are some concepts on top of it, like [template](info:template-element) element, that we'll cover much later.
## Removal methods
To remove nodes, there are the following methods:


@@ -1,426 +0,0 @@
# Cookies, document.cookie
Cookies are small strings of data that are stored directly in the browser. They are a part of HTTP protocol, defined by [RFC 6265](https://tools.ietf.org/html/rfc6265) specification.
Most of the time, cookies are set by a web server. Then they are automatically added to every request to the same domain.
One of the most widespread use cases is authentication:
1. Upon sign in, the server uses `Set-Cookie` HTTP-header in the response to set a cookie with "session identifier".
2. Next time, when a request is sent to the same domain, the browser sends the cookie over the net using the `Cookie` HTTP-header.
3. So the server knows who made the request.
We can also access cookies from the browser, using `document.cookie` property.
There are many tricky things about cookies and their options. In this chapter we'll cover them in detail.
## Reading from document.cookie
```online
Do you have any cookies on this site? Let's see:
```
```offline
Assuming you're on a website, it's possible to see the cookies, like this:
```
```js run
// At javascript.info, we use Google Analytics for statistics,
// so there should be some cookies
alert( document.cookie ); // cookie1=value1; cookie2=value2;...
```
The value of `document.cookie` consists of `name=value` pairs, delimited by `; `. Each one is a separate cookie.
To find a particular cookie, we can split `document.cookie` by `; `, and then find the right name. We can use either a regular expression or array functions to do that.
We leave it as an exercise for the reader. Also, at the end of the chapter you'll find helper functions to manipulate cookies.
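For illustration, one possible array-functions approach might look like this. It's a minimal sketch: `findCookie` is a hypothetical helper, not part of this chapter's API, and it takes the raw cookie string as an argument so it can run outside the browser too:

```javascript
// Hypothetical helper: looks up one cookie in a raw cookie string
// using split/find instead of a regular expression.
function findCookie(cookieString, name) {
  let pair = cookieString
    .split('; ')
    .find(part => part.startsWith(name + '='));
  return pair ? decodeURIComponent(pair.slice(name.length + 1)) : undefined;
}

// In the browser we'd call it as: findCookie(document.cookie, 'user')
```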
## Writing to document.cookie
We can write to `document.cookie`. But it's not a data property, it's an accessor.
**A write operation to `document.cookie` passes through the browser, which updates the cookie mentioned in it, but doesn't touch other cookies.**
For instance, this call sets a cookie with the name `user` and value `John`:
```js run
document.cookie = "user=John"; // update only cookie named 'user'
alert(document.cookie); // show all cookies
```
If you run it, then probably you'll see multiple cookies. That's because `document.cookie=` operation does not overwrite all cookies. It only sets the mentioned cookie `user`.
Technically, name and value can have any characters, but to keep the formatting valid they should be escaped using a built-in `encodeURIComponent` function:
```js run
// special values, need encoding
let name = "my name";
let value = "John Smith";
// encodes the cookie as my%20name=John%20Smith
document.cookie = encodeURIComponent(name) + '=' + encodeURIComponent(value);
alert(document.cookie); // ...; my%20name=John%20Smith
```
```warn header="Limitations"
There are a few limitations:
- The `name=value` pair, after `encodeURIComponent`, should not exceed 4kb. So we can't store anything huge in a cookie.
- The total number of cookies per domain is limited to around 20+, the exact limit depends on the browser.
```
Cookies have several options, many of which are important and should be set.
The options are listed after `key=value`, delimited by `;`, like this:
```js run
document.cookie = "user=John; path=/; expires=Tue, 19 Jan 2038 03:14:07 GMT"
```
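That `key=value; option; option=value` format can be sketched as a tiny serializer. This is a hypothetical helper for illustration only; the chapter's appendix has a full `setCookie` that also handles defaults:

```javascript
// Hypothetical sketch: builds a cookie string in the format shown above.
function serializeCookie(name, value, options = {}) {
  let str = encodeURIComponent(name) + "=" + encodeURIComponent(value);
  for (let key in options) {
    str += "; " + key;
    if (options[key] !== true) str += "=" + options[key]; // flag options (like `secure`) have no value
  }
  return str;
}

serializeCookie("user", "John", {path: "/", secure: true});
// "user=John; path=/; secure"
```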
## path
- **`path=/mypath`**
The URL path prefix where the cookie is accessible. It must be absolute. By default, it's the current path.
If a cookie is set with `path=/admin`, it's visible at pages `/admin` and `/admin/something`, but not at `/home` or `/adminpage`.
Usually, we set `path=/` to make the cookie accessible from all website pages.
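The visibility rule above can be sketched as a tiny predicate. `pathMatches` is a hypothetical helper, simplified from the full RFC 6265 path-matching rules:

```javascript
// Simplified sketch of cookie path matching (hypothetical helper):
// a request path matches if it equals the cookie path,
// or lies under it as a directory prefix.
function pathMatches(cookiePath, requestPath) {
  if (requestPath === cookiePath) return true;
  let prefix = cookiePath.endsWith("/") ? cookiePath : cookiePath + "/";
  return requestPath.startsWith(prefix);
}

pathMatches("/admin", "/admin/something"); // true
pathMatches("/admin", "/adminpage"); // false
```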
## domain
- **`domain=site.com`**
A domain where the cookie is accessible. In practice though, there are limitations: we can't set just any domain.
By default, a cookie is accessible only at the domain that set it. So, if the cookie was set by `site.com`, we won't get it at `other.com`.
...But what's more tricky, we also won't get the cookie at a subdomain `forum.site.com`!
```js
// at site.com
document.cookie = "user=John"
// at forum.site.com
alert(document.cookie); // no user
```
**There's no way to let a cookie be accessible from another 2nd-level domain, so `other.com` will never receive a cookie set at `site.com`.**
It's a safety restriction, to allow us to store sensitive data in cookies.
...But if we'd like to grant access to subdomains like `forum.site.com`, that's possible. We should explicitly set the `domain` option to the root domain: `domain=site.com`:
```js
// at site.com, make the cookie accessible on any subdomain:
document.cookie = "user=John; domain=site.com"
// at forum.site.com
alert(document.cookie); // with user
```
For historical reasons, `domain=.site.com` (with a dot at the start) also works this way; it might be better to add the dot to support very old browsers.
So, the `domain` option allows making a cookie accessible at subdomains.
## expires, max-age
By default, if a cookie doesn't have one of these options, it disappears when the browser is closed. Such cookies are called "session cookies".
To let cookies survive the browser being closed, we can set either the `expires` or `max-age` option.
- **`expires=Tue, 19 Jan 2038 03:14:07 GMT`**
The cookie expiration date, when the browser will delete it automatically.
The date must be exactly in this format, in the GMT timezone. We can use `date.toUTCString()` to get it. For instance, we can set the cookie to expire in 1 day:
```js
// +1 day from now
let date = new Date(Date.now() + 86400e3);
date = date.toUTCString();
document.cookie = "user=John; expires=" + date;
```
If we set `expires` to a date in the past, the cookie is deleted.
- **`max-age=3600`**
An alternative to `expires`, this option specifies the cookie's expiration in seconds from the current moment.
If zero or negative, then the cookie is deleted:
```js
// cookie will die +1 hour from now
document.cookie = "user=John; max-age=3600";
// delete cookie (let it expire right now)
document.cookie = "user=John; max-age=0";
```
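The deletion-by-past-date trick mentioned under `expires` can be sketched with a concrete date. The cookie name `user` is just an example:

```javascript
// The epoch start is guaranteed to be in the past:
let pastDate = new Date(0).toUTCString(); // "Thu, 01 Jan 1970 00:00:00 GMT"

// In the browser, writing it deletes the cookie:
// document.cookie = "user=John; expires=" + pastDate;
```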
## secure
- **`secure`**
The cookie should be transferred only over HTTPS.
**By default, if we set a cookie at `http://site.com`, then it also appears at `https://site.com` and vice versa.**
That is, cookies are domain-based, they do not distinguish between the protocols.
With this option, if a cookie is set by `https://site.com`, then it doesn't appear when the same site is accessed by HTTP, as `http://site.com`. So if a cookie has sensitive content that should never be sent over unencrypted HTTP, then the flag is the right thing.
```js
// assuming we're on https:// now
// set the cookie secure (only accessible if over HTTPS)
document.cookie = "user=John; secure";
```
## samesite
That's another security option, to protect from so-called XSRF (cross-site request forgery) attacks.
To understand when it's useful, let's introduce the following attack scenario.
### XSRF attack
Imagine, you are logged into the site `bank.com`. That is: you have an authentication cookie from that site. Your browser sends it to `bank.com` with every request, so that it recognizes you and performs all sensitive financial operations.
Now, while browsing the web in another window, you accidentally come to another site `evil.com`, which automatically submits a form `<form action="https://bank.com/pay">` to `bank.com` with input fields that initiate a transaction to the hacker's account.
The form is submitted from `evil.com` directly to the bank site, and your cookie is also sent, just because it's sent every time you visit `bank.com`. So the bank recognizes you and actually performs the payment.
![](cookie-xsrf.png)
That's called a cross-site request forgery (or XSRF) attack.
Real banks are protected from it, of course. All forms generated by `bank.com` have a special field, a so-called "xsrf protection token", that an evil page can neither generate nor somehow extract from a remote page (it can submit a form there, but can't get the data back).
But that takes time to implement: we need to ensure that every form has the token field, and we must also check all requests.
### Enter cookie samesite option
The cookie `samesite` option provides another way to protect from such attacks, that (in theory) should not require "xsrf protection tokens".
It has two possible values:
- **`samesite=strict` (same as `samesite` without value)**
A cookie with `samesite=strict` is never sent if the user comes from outside the site.
In other words, whether a user follows a link from their mail or submits a form from `evil.com`, or does any operation that originates from another domain, the cookie is not sent.
If authentication cookies have the `samesite` option, then an XSRF attack has no chance to succeed, because a submission from `evil.com` comes without cookies. So `bank.com` will not recognize the user and will not proceed with the payment.
The protection is quite reliable. Only operations that come from `bank.com` will send the `samesite` cookie.
There's a small inconvenience though.
When a user follows a legitimate link to `bank.com`, like from their own notes, they'll be surprised that `bank.com` does not recognize them. Indeed, `samesite=strict` cookies are not sent in that case.
We could work around that by using two cookies: one for "general recognition", only for the purposes of saying: "Hello, John", and the other one for data-changing operations with `samesite=strict`. Then a person coming from outside of the site will see a welcome, but payments must be initiated from the bank website.
- **`samesite=lax`**
A more relaxed approach that also protects from XSRF and doesn't break user experience.
Lax mode, just like `strict`, forbids the browser to send cookies when coming from outside the site, but adds an exception.
A `samesite=lax` cookie is sent if both of these conditions are true:
1. The HTTP method is "safe" (e.g. GET, but not POST).
The full list of safe HTTP methods is in the [RFC7231 specification](https://tools.ietf.org/html/rfc7231). Basically, these are the methods that should be used for reading, but not writing, the data. They must not perform any data-changing operations. Following a link is always GET, the safe method.
2. The operation performs top-level navigation (changes URL in the browser address bar).
That's usually true, but if the navigation is performed in an `<iframe>`, then it's not top-level. Also, AJAX requests do not perform any navigation, hence they don't fit.
So, what `samesite=lax` does is basically allow the most common "go to URL" operation to carry cookies. E.g. opening a website link from notes satisfies these conditions.
But anything more complicated, like an AJAX request from another site or a form submission, loses cookies.
If that's fine for you, then adding `samesite=lax` will probably not break the user experience, while adding protection.
Overall, `samesite` is a great option, but it has an important drawback:
- `samesite` is ignored (not supported) by old browsers (from around 2017 or earlier).
**So if we solely rely on `samesite` to provide protection, then old browsers will be vulnerable.**
But we surely can use `samesite` together with other protection measures, like xsrf tokens, to add an additional layer of defence and then, in the future, when old browsers die out, we'll probably be able to drop xsrf tokens.
## httpOnly
This option has nothing to do with JavaScript, but we have to mention it for completeness.
The web server uses the `Set-Cookie` header to set a cookie. And it may set the `httpOnly` option.
This option forbids any JavaScript access to the cookie. We can't see such a cookie or manipulate it using `document.cookie`.
That's used as a precautionary measure, to protect from certain attacks when a hacker injects their own JavaScript code into a page and waits for a user to visit that page. That shouldn't be possible at all: a hacker should not be able to inject their code into our site, but there may be bugs that let them do it.
Normally, if such a thing happens, and a user visits a web page with the hacker's code, then that code executes and gains access to `document.cookie` with user cookies containing authentication information. That's bad.
But if a cookie is `httpOnly`, then `document.cookie` doesn't see it, so it is protected.
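Since `httpOnly` can only be set by the server, here's a rough sketch of what such a response header might look like (the cookie name and value are hypothetical):

```
Set-Cookie: sessionId=abc123; HttpOnly
```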
## Appendix: Cookie functions
Here's a small set of functions to work with cookies, more convenient than a manual modification of `document.cookie`.
There are many cookie libraries for that, so these are for demo purposes. They are fully working though.
### getCookie(name)
The shortest way to access a cookie is to use a [regular expression](info:regular-expressions).
The function `getCookie(name)` returns the cookie with the given `name`:
```js
// returns the cookie with the given name,
// or undefined if not found
function getCookie(name) {
  let matches = document.cookie.match(new RegExp(
    "(?:^|; )" + name.replace(/([\.$?*|{}\(\)\[\]\\\/\+^])/g, '\\$1') + "=([^;]*)"
  ));
  return matches ? decodeURIComponent(matches[1]) : undefined;
}
```
Here `new RegExp` is generated dynamically, to match `; name=<value>`.
Please note that a cookie value is encoded, so `getCookie` uses a built-in `decodeURIComponent` function to decode it.
### setCookie(name, value, options)
Sets the cookie `name` to the given `value` with `path=/` by default (can be modified to add other defaults):
```js run
function setCookie(name, value, options = {}) {

  options = {
    path: '/',
    // add other defaults here if necessary
    ...options
  };

  // accessing .toUTCString on a missing `expires` would throw,
  // so check for a Date object instead:
  if (options.expires instanceof Date) {
    options.expires = options.expires.toUTCString();
  }

  let updatedCookie = encodeURIComponent(name) + "=" + encodeURIComponent(value);

  for (let optionKey in options) {
    updatedCookie += "; " + optionKey;
    let optionValue = options[optionKey];
    if (optionValue !== true) {
      updatedCookie += "=" + optionValue;
    }
  }

  document.cookie = updatedCookie;
}

// Example of use:
setCookie('user', 'John', {secure: true, 'max-age': 3600});
```
### deleteCookie(name)
To delete a cookie, we can call `setCookie` with a negative expiration date:
```js
function deleteCookie(name) {
  setCookie(name, "", {
    'max-age': -1
  });
}
```
```warn header="Updating or deleting must use same path and domain"
Please note: when we update or delete a cookie, we should use exactly the same path and domain options as when we set it.
```
Together: [cookie.js](cookie.js).
## Appendix: Third-party cookies
A cookie is called "third-party" if it's placed by a domain other than the one the user is visiting.
For instance:
1. A page at `site.com` loads a banner from another site: `<img src="https://ads.com/banner.png">`.
2. Along with the banner, the remote server at `ads.com` may set the `Set-Cookie` header with a cookie like `id=1234`. Such a cookie originates from the `ads.com` domain, and will only be visible at `ads.com`:
![](cookie-third-party.png)
3. Next time when `ads.com` is accessed, the remote server gets the `id` cookie and recognizes the user:
![](cookie-third-party-2.png)
4. What's even more important, when the user moves from `site.com` to another site `other.com` that also has a banner, then `ads.com` gets the cookie, as it belongs to `ads.com`, thus recognizing the visitor and tracking them as they move between sites:
![](cookie-third-party-3.png)
Third-party cookies are traditionally used for tracking and ads services, due to their nature. They are bound to the originating domain, so `ads.com` can track the same user between different sites, if they all access it.
Naturally, some people don't like being tracked, so browsers allow disabling such cookies.
Also, some modern browsers employ special policies for such cookies:
- Safari does not allow third-party cookies at all.
- Firefox comes with a "black list" of third-party domains where it blocks third-party cookies.
```smart
If we load a script from a third-party domain, like `<script src="https://google-analytics.com/analytics.js">`, and that script uses `document.cookie` to set a cookie, then such a cookie is not third-party.
If a script sets a cookie, then no matter where the script came from -- it belongs to the domain of the current webpage.
```
## Appendix: GDPR
This topic is not related to JavaScript at all, just something to keep in mind when setting cookies.
There's legislation in Europe called GDPR, which enforces a set of rules for websites to respect users' privacy. One of those rules is to require explicit permission from the user for tracking cookies.
Please note, that's only about tracking/identifying cookies.
So, if we set a cookie that just saves some information, but neither tracks nor identifies the user, then we are free to do it.
But if we are going to set a cookie with an authentication session or a tracking id, then a user must allow that.
Websites generally have two ways of following GDPR. You must have seen them both already on the web:
1. If a website wants to set tracking cookies only for authenticated users.
To do so, the registration form should have a checkbox like "accept the privacy policy", the user must check it, and then the website is free to set auth cookies.
2. If a website wants to set tracking cookies for everyone.
To do so legally, a website shows a modal "splash screen" for newcomers and requires them to agree to cookies. Then the website can set them and let people see the content. That can be disturbing for new visitors though. No one likes to see "must-click" modal splash screens instead of the content. But GDPR requires an explicit agreement.
GDPR is not only about cookies, it's about other privacy-related issues too, but that's too much beyond our scope.
## Summary
`document.cookie` provides access to cookies:
- write operations modify only the cookie mentioned in them.
- name/value must be encoded.
- one cookie is up to 4kb; 20+ cookies per site (depends on the browser).
Cookie options:
- `path=/`, by default current path, makes the cookie visible only under that path.
- `domain=site.com`, by default a cookie is visible on current domain only, if set explicitly to the domain, makes the cookie visible on subdomains.
- `expires` or `max-age` sets cookie expiration time, without them the cookie dies when the browser is closed.
- `secure` makes the cookie HTTPS-only.
- `samesite` forbids the browser from sending the cookie with requests coming from outside the site; helps to prevent XSRF attacks.
Additionally:
- Third-party cookies may be forbidden by the browser, e.g. Safari does that by default.
- When setting a tracking cookie for EU citizens, GDPR requires asking for permission.


@@ -1,38 +0,0 @@
function getCookie(name) {
  let matches = document.cookie.match(new RegExp(
    "(?:^|; )" + name.replace(/([\.$?*|{}\(\)\[\]\\\/\+^])/g, '\\$1') + "=([^;]*)"
  ));
  return matches ? decodeURIComponent(matches[1]) : undefined;
}

function setCookie(name, value, options = {}) {

  options = {
    path: '/',
    // add other defaults here if necessary
    ...options
  };

  // accessing .toUTCString on a missing `expires` would throw,
  // so check for a Date object instead:
  if (options.expires instanceof Date) {
    options.expires = options.expires.toUTCString();
  }

  let updatedCookie = encodeURIComponent(name) + "=" + encodeURIComponent(value);

  for (let optionKey in options) {
    updatedCookie += "; " + optionKey;
    let optionValue = options[optionKey];
    if (optionValue !== true) {
      updatedCookie += "=" + optionValue;
    }
  }

  document.cookie = updatedCookie;
}

function deleteCookie(name) {
  setCookie(name, "", {
    'max-age': -1
  });
}


@@ -1,10 +0,0 @@
<!doctype html>
<textarea style="width:200px; height: 60px;" id="area" placeholder="Write here"></textarea>
<br>
<button onclick="localStorage.removeItem('area');area.value=''">Clear</button>
<script>
area.value = localStorage.getItem('area');
area.oninput = () => {
  localStorage.setItem('area', area.value);
};
</script>


@@ -1,2 +0,0 @@
<!doctype html>
<textarea style="width:200px; height: 60px;" id="area"></textarea>


@@ -1,10 +0,0 @@
# Autosave a form field
Create a `textarea` field that "autosaves" its value on every change.
So, if the user accidentally closes the page and opens it again, they'll find their unfinished input in place.
Like this:
[iframe src="solution" height=120]


@@ -1,247 +0,0 @@
# LocalStorage, sessionStorage
Web storage objects `localStorage` and `sessionStorage` allow saving key/value pairs in the browser.
What's interesting about them is that the data survives a page refresh (for `sessionStorage`) and even a full browser restart (for `localStorage`). We'll see that very soon.
We already have cookies. Why additional objects?
- Unlike cookies, web storage objects are not sent to server with each request. Because of that, we can store much more. Most browsers allow at least 2 megabytes of data (or more) and have settings to configure that.
- The server can't manipulate storage objects via HTTP headers, everything's done in JavaScript.
- The storage is bound to the origin (domain/protocol/port triplet). That is, different protocols or subdomains mean different storage objects; they can't access each other's data.
Both storage objects provide the same methods and properties:
- `setItem(key, value)` -- store key/value pair.
- `getItem(key)` -- get the value by key.
- `removeItem(key)` -- remove the key with its value.
- `clear()` -- delete everything.
- `key(index)` -- get the key at a given position.
- `length` -- the number of stored items.
Let's see how it works.
## localStorage demo
The main features of `localStorage` are:
- Shared between all tabs and windows from the same origin.
- The data does not expire. It remains after a browser restart and even an OS reboot.
For instance, if you run this code...
```js run
localStorage.setItem('test', 1);
```
...And close/open the browser or just open the same page in a different window, then you can get it like this:
```js run
alert( localStorage.getItem('test') ); // 1
```
We only have to be on the same domain/port/protocol; the URL path can be different.
The `localStorage` is shared, so if we set the data in one window, the change becomes visible in the other one.
## Object-like access
We can also use a plain object way of getting/setting keys, like this:
```js run
// set key
localStorage.test = 2;
// get key
alert( localStorage.test ); // 2
// remove key
delete localStorage.test;
```
That's allowed for historical reasons, and mostly works, but it's generally not recommended, for two reasons:
1. If the key is user-generated, it can be anything, like `length` or `toString`, or another built-in method of `localStorage`. In that case `getItem/setItem` work fine, while object-like access fails:
```js run
let key = 'length';
localStorage[key] = 5; // Error, can't assign length
```
2. There's a `storage` event that triggers when we modify the data. That event does not happen for object-like access. We'll see it later in this chapter.
## Looping over keys
Methods provide get/set/remove functionality. But how to get all the keys?
Unfortunately, storage objects are not iterable.
One way is to use "array-like" iteration:
```js run
for(let i=0; i<localStorage.length; i++) {
  let key = localStorage.key(i);
  alert(`${key}: ${localStorage.getItem(key)}`);
}
```
Another way is to use the object-specific `for key in localStorage` loop.
That iterates over keys, but also outputs a few built-in fields that we don't need:
```js run
// bad try
for(let key in localStorage) {
  alert(key); // shows getItem, setItem and other built-in stuff
}
```
...So we either need to filter out fields from the prototype with a `hasOwnProperty` check:
```js run
for(let key in localStorage) {
  if (!localStorage.hasOwnProperty(key)) {
    continue; // skip keys like "setItem", "getItem" etc
  }
  alert(`${key}: ${localStorage.getItem(key)}`);
}
```
...Or just get the "own" keys with `Object.keys` and then loop over them if needed:
```js run
let keys = Object.keys(localStorage);
for(let key of keys) {
  alert(`${key}: ${localStorage.getItem(key)}`);
}
```
The latter works, because `Object.keys` only returns the keys that belong to the object, ignoring the prototype.
## Strings only
Please note that both key and value must be strings.
If we use any other type, like a number or an object, it gets converted to a string automatically:
```js run
sessionStorage.user = {name: "John"};
alert(sessionStorage.user); // [object Object]
```
We can use `JSON` to store objects though:
```js run
sessionStorage.user = JSON.stringify({name: "John"});
// sometime later
let user = JSON.parse( sessionStorage.user );
alert( user.name ); // John
```
It is also possible to stringify the whole storage object, e.g. for debugging purposes:
```js run
// added formatting options to JSON.stringify to make the object look nicer
alert( JSON.stringify(localStorage, null, 2) );
```
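The JSON storing pattern above can be wrapped into tiny helpers. These are hypothetical names, not part of the Web Storage API; the demo uses a Map-based stand-in for the storage so it also runs outside the browser:

```javascript
// Hypothetical helpers over any storage-like object (localStorage, sessionStorage...):
function saveJSON(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}

function loadJSON(storage, key) {
  let raw = storage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}

// A minimal stand-in with the same setItem/getItem contract, for the demo:
let fakeStorage = {
  data: new Map(),
  setItem(k, v) { this.data.set(k, String(v)); },
  getItem(k) { return this.data.has(k) ? this.data.get(k) : null; }
};

saveJSON(fakeStorage, "user", {name: "John"});
loadJSON(fakeStorage, "user").name; // "John"
```

In the browser we'd pass `localStorage` or `sessionStorage` instead of `fakeStorage`.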
## sessionStorage
The `sessionStorage` object is used much less often than `localStorage`.
Properties and methods are the same, but it's much more limited:
- The `sessionStorage` exists only within the current browser tab.
- Another tab with the same page will have a different storage.
- But it is shared between iframes in the tab (assuming they come from the same origin).
- The data survives page refresh, but not closing/opening the tab.
Let's see that in action.
Run this code...
```js run
sessionStorage.setItem('test', 1);
```
...Then refresh the page. Now you can still get the data:
```js run
alert( sessionStorage.getItem('test') ); // after refresh: 1
```
...But if you open the same page in another tab, and try again there, the code above returns `null`, meaning "nothing found".
That's exactly because `sessionStorage` is bound not only to the origin, but also to the browser tab. For that reason, `sessionStorage` is used sparingly.
## Storage event
When the data gets updated in `localStorage` or `sessionStorage`, the [storage](https://www.w3.org/TR/webstorage/#the-storage-event) event triggers, with these properties:
- `key` -- the key that was changed (`null` if `.clear()` is called).
- `oldValue` -- the old value (`null` if the key is newly added).
- `newValue` -- the new value (`null` if the key is removed).
- `url` -- the url of the document where the update happened.
- `storageArea` -- either the `localStorage` or `sessionStorage` object where the update happened.
The important thing is: the event triggers on all `window` objects where the storage is accessible, except the one that caused it.
Let's elaborate.
Imagine, you have two windows with the same site in each. So `localStorage` is shared between them.
```online
You might want to open this page in two browser windows to test the code below.
```
Now if both windows are listening for `window.onstorage`, then each one will react to updates that happened in the other one.
```js run
// triggers on updates made to the same storage from other documents
window.onstorage = event => {
  if (event.key != 'now') return;
  alert(event.key + ':' + event.newValue + " at " + event.url);
};
localStorage.setItem('now', Date.now());
```
Please note that the event also contains `event.url` -- the url of the document where the data was updated.
Also, `event.storageArea` contains the storage object -- the event is the same for both `sessionStorage` and `localStorage`, so `storageArea` references the one that was modified. We may even want to set something back in it, to "respond" to a change.
**That allows different windows from the same origin to exchange messages.**
Modern browsers also support the [Broadcast channel API](https://developer.mozilla.org/en-US/docs/Web/API/Broadcast_Channel_API), a special API for same-origin inter-window communication. It's more fully featured, but less supported. There are libraries that polyfill that API, based on `localStorage`, that make it available everywhere.
## Summary
Web storage objects `localStorage` and `sessionStorage` allow storing key/value pairs in the browser.
- Both `key` and `value` must be strings.
- The limit is 2mb+, depends on the browser.
- They do not expire.
- The data is bound to the origin (domain/port/protocol).
| `localStorage` | `sessionStorage` |
|----------------|------------------|
| Shared between all tabs and windows with the same origin | Visible within a browser tab, including iframes from the same origin |
| Survives browser restart | Dies on tab close |
API:
- `setItem(key, value)` -- store key/value pair.
- `getItem(key)` -- get the value by key.
- `removeItem(key)` -- remove the key with its value.
- `clear()` -- delete everything.
- `key(index)` -- get the key at a given position.
- `length` -- the number of stored items.
- Use `Object.keys` to get all keys.
- We can use the keys as object properties; in that case the `storage` event doesn't trigger.
Storage event:
- Triggers on `setItem`, `removeItem`, `clear` calls.
- Contains all the data about the operation, the document `url` and the storage object.
- Triggers on all `window` objects that have access to the storage except the one that generated it (within a tab for `sessionStorage`, globally for `localStorage`).


@@ -1,9 +0,0 @@
<!doctype html>
<script>
window.addEventListener('storage', event => {
  alert("iframe.html: onstorage");
});
</script>
<button onclick="sessionStorage.setItem('now', new Date())">sessionStorage.setItem</button>
</body>
</html>


@@ -1,10 +0,0 @@
<!doctype html>
<script>
window.addEventListener('storage', event => {
  alert("index.html: onstorage");
});
</script>
<button onclick="sessionStorage.setItem('now', new Date())">sessionStorage.setItem</button>
<iframe src="iframe.html" style="height:100px"></iframe>
</body>
</html>


@ -1,754 +0,0 @@
libs:
- 'https://cdn.jsdelivr.net/npm/idb@3.0.2/build/idb.min.js'
---
# IndexedDB
IndexedDB is a built-in database, much more powerful than `localStorage`.
- Key/value storage: value can be (almost) anything, multiple key types.
- Supports transactions for reliability.
- Supports key range queries, indexes.
- Can store much more data than `localStorage`.
That power is usually excessive for traditional client-server apps. IndexedDB is intended for offline apps, to be combined with ServiceWorkers and other technologies.
The native interface to IndexedDB, described in the specification <https://www.w3.org/TR/IndexedDB>, is event-based.
We can also use `async/await` with the help of a promise-based wrapper, like <https://github.com/jakearchibald/idb>. That's pretty convenient, but the wrapper is not perfect, it can't replace events for all cases, so we'll start with events, and then use the wrapper.
## Open database
To start working with IndexedDB, we need to open a database.
The syntax:
```js
let openRequest = indexedDB.open(name, version);
```
- `name` -- a string, the database name.
- `version` -- a positive integer version, by default `1` (explained below).
We can have many databases with different names, all within the current origin (domain/protocol/port). So different websites can't access databases of each other.
After the call, we need to listen to events on `openRequest` object:
- `success`: database is ready, use the database object `openRequest.result` for further work.
- `error`: open failed.
- `upgradeneeded`: database version is outdated (see below).
**IndexedDB has a built-in mechanism of "schema versioning", absent in server-side databases.**
Unlike server-side databases, IndexedDB is client-side, so we don't have the data at hand. But when we publish a new version of our app, we may need to update the database.
If the local database version is less than specified in `open`, then a special event `upgradeneeded` is triggered, and we can compare versions and upgrade data structures as needed.
The event also triggers when the database did not exist yet, so we can perform initialization.
For instance, when we first publish our app, we open it with version `1` and perform the initialization in `upgradeneeded` handler:
```js
let openRequest = indexedDB.open("store", *!*1*/!*);
openRequest.onupgradeneeded = function() {
// triggers if the client had no database
// ...perform initialization...
};
openRequest.onerror = function() {
  console.error("Error", openRequest.error);
};
openRequest.onsuccess = function() {
let db = openRequest.result;
// continue to work with database using db object
};
```
When we publish the 2nd version:
```js
let openRequest = indexedDB.open("store", *!*2*/!*);
// check the existing database version, do the updates if needed:
openRequest.onupgradeneeded = function(event) {
  let db = openRequest.result;
  switch(event.oldVersion) { // existing (old) db version
case 0:
// version 0 means that the client had no database
// perform initialization
case 1:
// client had version 1
// update
}
};
```
After `openRequest.onsuccess` we have the database object in `openRequest.result`, that we'll use for further operations.
To delete a database:
```js
let deleteRequest = indexedDB.deleteDatabase(name)
// deleteRequest.onsuccess/onerror tracks the result
```
## Object store
An object store is a core concept of IndexedDB. Counterparts in other databases are called "tables" or "collections". It's where the data is stored. A database may have multiple stores: one for users, another one for goods, etc.
Despite the name "object store", primitives can be stored too.
**We can store almost any value, including complex objects.**
IndexedDB uses the [standard serialization algorithm](https://www.w3.org/TR/html53/infrastructure.html#section-structuredserializeforstorage) to clone-and-store an object. It's like `JSON.stringify`, but more powerful, capable of storing much more datatypes.
An example of an object that can't be stored: an object with circular references. Such objects are not serializable. `JSON.stringify` also fails for such objects.
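For instance, here's a minimal demo of `JSON.stringify` failing on a circular structure:

```js
// an object referencing itself can't be serialized
let circular = {};
circular.self = circular;

try {
  JSON.stringify(circular);
} catch (err) {
  console.log(err.name); // TypeError
}
```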
**There must be a unique `key` for every value in the store.**

A key must be of one of these types: number, date, string, binary, or array. It's a unique identifier: we can search/remove/update values by the key.
![](indexeddb-structure.png)
We can provide a key when we add a value to the store, similar to `localStorage`. That's good for storing primitive values. But when we store objects, IndexedDB allows us to set up an object property as the key, which is much more convenient. Or we can auto-generate keys.
The syntax to create an object store:
```js
db.createObjectStore(name[, keyOptions]);
```
Please note, the operation is synchronous, no `await` needed.
- `name` is the store name, e.g. `"books"` for books,
- `keyOptions` is an optional object with one of two properties:
- `keyPath` -- a path to an object property that IndexedDB will use as the key, e.g. `id`.
- `autoIncrement` -- if `true`, then the key for a newly stored object is generated automatically, as an ever-incrementing number.
If we don't supply any options, then we'll need to provide a key explicitly later, when storing an object.
For instance, this object store uses `id` property as the key:
```js
db.createObjectStore('books', {keyPath: 'id'});
```
**An object store can only be created/modified while updating the DB version, in `upgradeneeded` handler.**
That's a technical limitation. Outside of the handler we'll be able to add/remove/update the data, but object stores are changed only during version update.
To do an upgrade, there are two main ways:
1. We can compare versions and run per-version operations.
2. Or we can get a list of existing object stores as `db.objectStoreNames`. That object is a [DOMStringList](https://html.spec.whatwg.org/multipage/common-dom-interfaces.html#domstringlist), and it provides a `contains(name)` method to check for existence. And then we can do updates depending on what exists.

Here's a demo of the second approach:
```js
let openRequest = indexedDB.open("db", 1);
// create an object store for books if not exists
openRequest.onupgradeneeded = function() {
let db = openRequest.result;
if (!db.objectStoreNames.contains('books')) {
db.createObjectStore('books', {keyPath: 'id'});
}
};
```
To delete an object store:
```js
db.deleteObjectStore('books')
```
## Transactions
The term "transaction" is generic, used in many kinds of databases.
A transaction is a group of operations that should either all succeed or all fail.
For instance, when a person buys something, we need:
1. Subtract the money from their account.
2. Add the item to their inventory.
It would be pretty bad if we complete the 1st operation, and then something goes wrong, e.g. lights out, and we fail to do the 2nd. Both should either succeed (purchase complete, good!) or both fail (at least the person kept their money, so they can retry).
Transactions can guarantee that.
**All data operations must be made within a transaction in IndexedDB.**
To start a transaction:
```js
db.transaction(store[, type]);
```
- `store` is a store name that the transaction is going to access, e.g. `"books"`. Can be an array of store names if we're going to access multiple stores.
- `type` -- a transaction type, one of:
  - `readonly` -- can only read, the default.
  - `readwrite` -- can read and write the data, but not modify object stores.

There's also a `versionchange` transaction type: such transactions can do everything, but we can't create them manually. IndexedDB automatically creates a `versionchange` transaction when opening the database, for the `upgradeneeded` handler. That's why it's the only place where we can update the database structure and create/remove object stores.
```smart header="What are transaction types for?"
Performance is the reason why transactions need to be labeled either `readonly` or `readwrite`.

Many `readonly` transactions can access the same store concurrently, but `readwrite` transactions can't. A `readwrite` transaction "locks" the store for writing. The next transaction must wait until the previous one finishes before accessing the same store.
```
After the transaction is created, we can add an item to the store, like this:
```js
let transaction = db.transaction("books", "readwrite"); // (1)
// get an object store to operate on it
*!*
let books = transaction.objectStore("books"); // (2)
*/!*
let book = {
id: 'js',
price: 10,
created: new Date()
};
*!*
let request = books.add(book); // (3)
*/!*
request.onsuccess = function() { // (4)
console.log("Book added to the store", request.result);
};
request.onerror = function() {
console.log("Error", request.error);
};
```
There are basically four steps:
1. Create a transaction, mention all stores it's going to access, at `(1)`.
2. Get the store object using `transaction.objectStore(name)`, at `(2)`.
3. Perform the request to the object store `books.add(book)`, at `(3)`.
4. ...Handle request success/error `(4)`, make other requests if needed, etc.
Object stores support two methods to store a value:
- **put(value, [key])**
Add the `value` to the store. The `key` is supplied only if the object store does not have the `keyPath` or `autoIncrement` option. If there's already a value with the same key, it will be replaced.
- **add(value, [key])**
Same as `put`, but if there's already a value with the same key, then the request fails, and an error with the name `"ConstraintError"` is generated.
Just like when opening a database, we send a request: `books.add(book)`, and then wait for `success/error` events.
- The `request.result` for `add` is the key of the new object.
- The error is in `request.error` (if any).
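The `add` vs `put` semantics can be sketched with a hypothetical in-memory mock (a plain `Map`, not real IndexedDB; note that real requests are asynchronous, while this mock is synchronous):

```js
// mock illustrating add vs put semantics
function makeMockStore() {
  let data = new Map();
  return {
    put(value, key) { // insert or overwrite
      data.set(key, value);
    },
    add(value, key) { // insert only; a duplicate key is an error
      if (data.has(key)) {
        throw Object.assign(new Error('Key already exists'), { name: 'ConstraintError' });
      }
      data.set(key, value);
    },
    get: key => data.get(key)
  };
}

let store = makeMockStore();
store.add({ title: 'JS' }, 'js');
store.put({ title: 'JS, 2nd ed' }, 'js'); // ok, replaces the old value
// store.add({...}, 'js') would throw a "ConstraintError"
```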
## Transactions autocommit
In the example above we started the transaction and made `add` request. We could make more requests. How do we finish ("commit") the transaction?
The short answer is: we don't.
In the next version 3.0 of the specification, there will probably be a manual way to finish the transaction, but right now in 2.0 there isn't.
**When all transaction requests are finished, and the [microtasks queue](info:microtask-queue) is empty, it is committed automatically.**
```smart header="What's an \"empty microtask queue\"?"
The microtask queue is explained in [another chapter](info:async-await#microtask-queue). In short, an empty microtask queue means that for all settled promises their `.then/catch/finally` handlers are executed.
In other words, handling of finished promises and resuming "awaits" is done before closing the transaction.
That's a minor technical detail. If we're using `async/await` instead of low-level promise calls, then we can assume that a transaction commits when all its requests are done, and the current code finishes.
```
So, in the example above no special code is needed to finish the transaction.
The transactions' auto-commit principle has an important side effect. We can't insert an async operation like `fetch` or `setTimeout` in the middle of a transaction. IndexedDB will not keep the transaction waiting till these are done.

In the code below, `request2` in line `(*)` fails, because the transaction is already committed and can't accept any more requests:
```js
let request1 = books.add(book);
request1.onsuccess = function() {
fetch('/').then(response => {
*!*
let request2 = books.add(anotherBook); // (*)
*/!*
request2.onerror = function() {
console.log(request2.error.name); // TransactionInactiveError
};
});
};
```
That's because `fetch` is an asynchronous operation, a macrotask. Transactions are closed before the browser starts doing macrotasks.
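This ordering can be observed in plain JavaScript, without any IndexedDB: promise handlers (microtasks) run before `setTimeout` callbacks (macrotasks):

```js
// microtasks run before macrotasks; a transaction commits
// once the current code and the microtasks are done
let order = [];

setTimeout(() => order.push('macrotask'), 0);
Promise.resolve().then(() => order.push('microtask'));

setTimeout(() => {
  console.log(order); // ['microtask', 'macrotask']
}, 10);
```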
Authors of IndexedDB spec believe that transactions should be short-lived. Mostly for performance reasons.
Notably, `readwrite` transactions "lock" the stores for writing. So if one part of the application initiated `readwrite` on the `books` object store, then another part that wants to do the same has to wait: the new transaction "hangs" till the first one is done. That can lead to strange delays if transactions take a long time.
So, what to do?
In the example above we could make a new `db.transaction` right before the new request `(*)`.
But it would be even better, if we'd like to keep the operations together in one transaction, to separate the IndexedDB transactions from "other" async stuff.

First, make the `fetch` and prepare the data if needed; afterwards create a transaction and perform all the database requests; then it'll work.
To detect the moment of successful completion, we can listen to `transaction.oncomplete` event:
```js
let transaction = db.transaction("books", "readwrite");
// ...perform operations...
transaction.oncomplete = function() {
console.log("Transaction is complete");
};
```
Only `complete` guarantees that the transaction is saved as a whole. Individual requests may succeed, but the final write operation may go wrong (e.g. I/O error or something).
To manually abort the transaction, call:
```js
transaction.abort();
```
That cancels all modification made by the requests in it and triggers `transaction.onabort` event.
## Error handling
Write requests may fail.
That's to be expected, not only because of possible errors on our side, but also for reasons not related to the transaction itself. For instance, the storage quota may be exceeded. So we must be ready to handle such cases.
**A failed request automatically aborts the transaction, canceling all its changes.**
Sometimes a request may fail with a non-critical error. We'd like to handle it in `request.onerror` and continue the transaction. Then, to prevent the transaction abort, we should call `event.preventDefault()`.
In the example below a new book is added with the same key (`id`). The `store.add` method generates a `"ConstraintError"` in that case. We handle it without canceling the transaction:
```js
let transaction = db.transaction("books", "readwrite");
let book = { id: 'js', price: 10 };
let request = transaction.objectStore("books").add(book);
request.onerror = function(event) {
// ConstraintError occurs when an object with the same id already exists
if (request.error.name == "ConstraintError") {
console.log("Book with such id already exists"); // handle the error
event.preventDefault(); // don't abort the transaction
} else {
// unexpected error, can't handle it
// the transaction will abort
}
};
transaction.onabort = function() {
console.log("Error", transaction.error);
};
```
### Event delegation
Do we need onerror/onsuccess for every request? Not always. We can use event delegation instead.
**IndexedDB events bubble: `request` -> `transaction` -> `database`.**
All events are DOM events, with capturing and bubbling, but usually only the bubbling stage is used.
So we can catch all errors using `db.onerror` handler, for reporting or other purposes:
```js
db.onerror = function(event) {
let request = event.target; // the request that caused the error
console.log("Error", request.error);
};
```
...But what if an error is fully handled? We don't want to report it in that case.
We can stop the bubbling and hence `db.onerror` by using `event.stopPropagation()` in `request.onerror`.
```js
request.onerror = function(event) {
if (request.error.name == "ConstraintError") {
console.log("Book with such id already exists"); // handle the error
event.preventDefault(); // don't abort the transaction
event.stopPropagation(); // don't bubble error up, "chew" it
} else {
// do nothing
// transaction will be aborted
// we can take care of error in transaction.onabort
}
};
```
## Searching by keys
There are two main ways to search in an object store:
1. By a key or a key range. That is: by `book.id` in our "books" storage.
2. By another object field, e.g. `book.price`. We need an index for that.
First let's deal with the keys and key ranges `(1)`.
Methods that involve searching support either exact keys or so-called "range queries" -- [IDBKeyRange](https://www.w3.org/TR/IndexedDB/#keyrange) objects that specify a "key range".
Ranges are created using following calls:
- `IDBKeyRange.lowerBound(lower, [open])` means: `≥lower` (or `>lower` if `open` is true)
- `IDBKeyRange.upperBound(upper, [open])` means: `≤upper` (or `<upper` if `open` is true)
- `IDBKeyRange.bound(lower, upper, [lowerOpen], [upperOpen])` means: between `lower` and `upper`, inclusive by default; if the corresponding `open` flag is true, the endpoint is excluded.
- `IDBKeyRange.only(key)` -- a range that consists of only one `key`, rarely used.
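How the `open` flags work can be sketched as a plain predicate (a hypothetical `inBound` helper for comparable keys, just for illustration; real `IDBKeyRange` also handles dates, binary keys and arrays):

```js
// sketch of IDBKeyRange.bound semantics:
// inclusive by default, an "open" flag excludes the endpoint
function inBound(key, lower, upper, lowerOpen = false, upperOpen = false) {
  if (lowerOpen ? key <= lower : key < lower) return false;
  if (upperOpen ? key >= upper : key > upper) return false;
  return true;
}

inBound('css', 'css', 'html');       // true: endpoints included by default
inBound('css', 'css', 'html', true); // false: lowerOpen excludes 'css'
```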
All searching methods accept a `query` argument that can be either an exact key or a key range:
- `store.get(query)` -- search for the first value by a key or a range.
- `store.getAll([query], [count])` -- search for all values, limit by `count` if given.
- `store.getKey(query)` -- search for the first key that satisfies the query, usually a range.
- `store.getAllKeys([query], [count])` -- search for all keys that satisfy the query, usually a range, up to `count` if given.
- `store.count([query])` -- get the total count of keys that satisfy the query, usually a range.
For instance, we have a lot of books in our store. Remember, the `id` field is the key, so all these methods can search by `id`.
Request examples:
```js
// get one book
books.get('js')

// get books with 'css' <= id <= 'html'
books.getAll(IDBKeyRange.bound('css', 'html'))

// get books with id > 'html'
books.getAll(IDBKeyRange.lowerBound('html', true))

// get all books
books.getAll()

// get all keys where id > 'js'
books.getAllKeys(IDBKeyRange.lowerBound('js', true))
```
```smart header="Object store is always sorted"
Object store sorts values by key internally.
So requests that return many values always return them sorted by key.
```
## Searching by any field with an index
To search by other object fields, we need to create an additional data structure named "index".
An index is an "add-on" to the store that tracks a given object field. For each value of that field, it stores a list of keys for objects that have that value. There will be a more detailed picture below.
The syntax:
```js
objectStore.createIndex(name, keyPath, [options]);
```
- **`name`** -- index name,
- **`keyPath`** -- path to the object field that the index should track (we're going to search by that field),
- **`options`** -- an optional object with properties:
  - **`unique`** -- if true, then there may be only one object in the store with the given value at the `keyPath`. The index will enforce that by generating an error if we try to add a duplicate.
  - **`multiEntry`** -- only used if the value on `keyPath` is an array. In that case, by default, the index treats the whole array as the key. But if `multiEntry` is true, then the index keeps a list of store objects for each value in that array. So array members become index keys.
In our example, we store books keyed by `id`.
Let's say we want to search by `price`.
First, we need to create an index. It must be done in `upgradeneeded`, just like an object store:
```js
openRequest.onupgradeneeded = function() {
// we must create the index here, in versionchange transaction
let books = db.createObjectStore('books', {keyPath: 'id'});
*!*
  let index = books.createIndex('price_idx', 'price');
*/!*
};
```
- The index will track `price` field.
- The price is not unique, there may be multiple books with the same price, so we don't set `unique` option.
- The price is not an array, so `multiEntry` flag is not applicable.
Imagine that our `books` store has 4 books. Here's the picture that shows exactly what the `index` is:
![](indexeddb-index.png)
As said, the index for each value of `price` (second argument) keeps the list of keys that have that price.
The index keeps itself up to date automatically, we don't have to care about it.
Now, when we want to search for a given price, we simply apply the same search methods to the index:
```js
let transaction = db.transaction("books"); // readonly
let books = transaction.objectStore("books");
let priceIndex = books.index("price_idx");
*!*
let request = priceIndex.getAll(10);
*/!*
request.onsuccess = function() {
  if (request.result.length > 0) {
    console.log("Books", request.result); // array of books with price=10
  } else {
    console.log("No such books");
  }
};
```
We can also use `IDBKeyRange` to create ranges and look for cheap/expensive books:
```js
// find books where price < 5
let request = priceIndex.getAll(IDBKeyRange.upperBound(5));
```
Indexes are internally sorted by the tracked object field, `price` in our case. So when we do the search, the results are also sorted by `price`.
## Deleting from store
The `delete` method looks up values to delete by a query, just like `getAll`.
- **`delete(query)`** -- delete matching values by query.
For instance:
```js
// delete the book with id='js'
books.delete('js');
```
If we'd like to delete books based on a price or another object field, then we should first find the key in the index, and then call `delete`:
```js
// find the key where price = 5
let request = priceIndex.getKey(5);
request.onsuccess = function() {
let id = request.result;
let deleteRequest = books.delete(id);
};
```
To delete everything:
```js
books.clear(); // clear the storage.
```
## Cursors
Methods like `getAll/getAllKeys` return an array of keys/values.
But an object store can be huge, bigger than the available memory.
Then `getAll` will fail to get all records as an array.
What to do?
Cursors provide the means to work around that.
**A *cursor* is a special object that traverses the object storage, given a query, and returns one key/value at a time, thus saving memory.**
As an object store is sorted internally by key, a cursor walks the store in key order (ascending by default).
The syntax:
```js
// like getAll, but with a cursor:
let request = store.openCursor(query, [direction]);
// to get keys, not values (like getAllKeys): store.openKeyCursor
```
- **`query`** is a key or a key range, same as for `getAll`.
- **`direction`** is an optional argument, which order to use:
- `"next"` -- the default, the cursor walks up from the record with the lowest key.
- `"prev"` -- the reverse order: down from the record with the biggest key.
- `"nextunique"`, `"prevunique"` -- same as above, but skip records with the same key (only for cursors over indexes, e.g. for multiple books with price=5 only the first one will be returned).
**The main difference is that, with a cursor, `request.onsuccess` triggers multiple times: once for each result.**
Here's an example of how to use a cursor:
```js
let transaction = db.transaction("books");
let books = transaction.objectStore("books");
let request = books.openCursor();
// called for each book found by the cursor
request.onsuccess = function() {
let cursor = request.result;
if (cursor) {
let key = cursor.key; // book key (id field)
let value = cursor.value; // book object
console.log(key, value);
cursor.continue();
} else {
console.log("No more books");
}
};
```
The main cursor methods are:
- `advance(count)` -- advance the cursor `count` times, skipping values.
- `continue([key])` -- advance the cursor to the next value in range matching or after key.
Whether or not there are more values matching the cursor, `onsuccess` gets called; then `request.result` is either the cursor pointing at the next record, or `undefined`.
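This calling convention can be sketched with a mock (a hypothetical `openMockCursor` over a plain sorted array, not real IndexedDB): the callback fires once per record, and once more at the end with an empty result:

```js
// hypothetical mock of the cursor protocol over sorted [key, value] pairs
function openMockCursor(records, onsuccess) {
  let request = {};
  let i = 0;
  function step() {
    // one "onsuccess" per record, then a final call with result = null
    request.result = (i < records.length) ? {
      key: records[i][0],
      value: records[i][1],
      continue() { i++; step(); }
    } : null;
    onsuccess(request);
  }
  step();
}

let visited = [];
openMockCursor([['css', 5], ['js', 10]], request => {
  let cursor = request.result;
  if (cursor) {
    visited.push(cursor.key);
    cursor.continue();
  } else {
    visited.push('done');
  }
});

console.log(visited); // ['css', 'js', 'done']
```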
In the example above the cursor was made for the object store.
But we can also make a cursor over an index. As we remember, indexes allow us to search by an object field. Cursors over indexes do precisely the same as over object stores -- they save memory by returning one value at a time.

For cursors over indexes, `cursor.key` is the index key (e.g. price), and we should use the `cursor.primaryKey` property for the object key:
```js
let request = priceIdx.openCursor(IDBKeyRange.upperBound(5));
// called for each record
request.onsuccess = function() {
let cursor = request.result;
if (cursor) {
    let primaryKey = cursor.primaryKey; // next object store key (id field)
    let value = cursor.value; // next object store object (book object)
    let key = cursor.key; // next index key (price)
console.log(key, value);
cursor.continue();
} else {
console.log("No more books");
}
};
```
## Promise wrapper
Adding `onsuccess/onerror` to every request is quite a cumbersome task. Sometimes we can make our life easier by using event delegation, e.g. setting handlers on whole transactions, but `async/await` is much more convenient.
Let's use a thin promise wrapper <https://github.com/jakearchibald/idb> further in this chapter. It creates a global `idb` object with [promisified](info:promisify) IndexedDB methods.
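Under the hood, such a wrapper converts each request into a promise, roughly like this (a simplified sketch, not the actual `idb` source):

```js
// simplified sketch: turn an onsuccess/onerror
// IndexedDB-style request into a promise
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// usage in the browser (sketch):
// let db = await promisifyRequest(indexedDB.open('store', 1));
```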
Then, instead of `onsuccess/onerror` we can write like this:
```js
let db = await idb.openDb('store', 1, db => {
if (db.oldVersion == 0) {
// perform the initialization
db.createObjectStore('books', {keyPath: 'id'});
}
});
let transaction = db.transaction('books', 'readwrite');
let books = transaction.objectStore('books');
try {
await books.add(...);
await books.add(...);
await transaction.complete;
console.log('jsbook saved');
} catch(err) {
console.log('error', err.message);
}
```
So we have all the sweet "plain async code" and "try..catch" stuff.
### Error handling
If we don't catch the error, then it falls through, just as usual.
An uncaught error becomes an "unhandled promise rejection" event on `window` object.
We can handle such errors like this:
```js
window.addEventListener('unhandledrejection', event => {
let request = event.target; // IndexedDB native request object
let error = event.reason; // Unhandled error object, same as request.error
...report about the error...
});
```
### "Inactive transaction" pitfall
As we already know, a transaction auto-commits as soon as the browser is done with the current code and microtasks. So if we put a *macrotask* like `fetch` in the middle of a transaction, then the transaction won't wait for it to finish. It just auto-commits. So the next request in it fails.
For a promise wrapper and `async/await` the situation is the same.
Here's an example of `fetch` in the middle of the transaction:
```js
let transaction = db.transaction("inventory", "readwrite");
let inventory = transaction.objectStore("inventory");
await inventory.add({ id: 'js', price: 10, created: new Date() });
await fetch(...); // (*)
await inventory.add({ id: 'css', price: 5, created: new Date() }); // Error
```
The next `inventory.add` after `fetch` `(*)` fails with an "inactive transaction" error, because the transaction is already committed and closed at that time.
The workaround is same as when working with native IndexedDB: either make a new transaction or just split things apart.
1. Prepare the data and fetch all that's needed first.
2. Then save in the database.
### Getting native objects
Internally, the wrapper performs a native IndexedDB request, adding `onerror/onsuccess` to it, and returns a promise that rejects/resolves with the result.
That works fine most of the time. The examples are at the lib page <https://github.com/jakearchibald/idb>.

In a few rare cases, when we need the original `request` object, we can access it as the `promise.request` property of the promise:
```js
let promise = books.add(book); // get a promise (don't await its result)
let request = promise.request; // native request object
let transaction = request.transaction; // native transaction object
// ...do some native IndexedDB voodoo...
let result = await promise; // if still needed
```
## Summary
IndexedDB can be thought of as a "localStorage on steroids". It's a simple key-value database, powerful enough for offline apps, yet simple to use.
The best manual is the specification, [the current one](https://w3c.github.io/IndexedDB) is 2.0, but a few methods from [3.0](https://w3c.github.io/IndexedDB/) (it's not much different) are partially supported.
The usage can be described with a few phrases:
1. Get a promise wrapper like [idb](https://github.com/jakearchibald/idb).
2. Open a database: `idb.openDb(name, version, onupgradeneeded)`
- Create object stores and indexes in the `onupgradeneeded` handler.
- Update version if needed - either by comparing numbers or just checking what exists.
3. For requests:
- Create transaction `db.transaction('books')` (readwrite if needed).
- Get the object store `transaction.objectStore('books')`.
4. Then, to search by a key, call methods on the object store directly.
- To search by an object field, create an index.
5. If the data does not fit in memory, use a cursor.
Here's a small demo app:
[codetabs src="books" current="index.html"]

<!doctype html>
<script src="https://cdn.jsdelivr.net/npm/idb@3.0.2/build/idb.min.js"></script>
<button onclick="addBook()">Add a book</button>
<button onclick="clearBooks()">Clear books</button>
<p>Books list:</p>
<ul id="listElem"></ul>
<script>
let db;
init();
async function init() {
db = await idb.openDb('booksDb', 1, db => {
db.createObjectStore('books', {keyPath: 'name'});
});
list();
}
async function list() {
let tx = db.transaction('books');
let bookStore = tx.objectStore('books');
let books = await bookStore.getAll();
if (books.length) {
listElem.innerHTML = books.map(book => `<li>
name: ${book.name}, price: ${book.price}
</li>`).join('');
} else {
listElem.innerHTML = '<li>No books yet. Please add books.</li>'
}
}
async function clearBooks() {
let tx = db.transaction('books', 'readwrite');
await tx.objectStore('books').clear();
await list();
}
async function addBook() {
let name = prompt("Book name?");
let price = +prompt("Book price?");
let tx = db.transaction('books', 'readwrite');
try {
await tx.objectStore('books').add({name, price});
await list();
} catch(err) {
if (err.name == 'ConstraintError') {
alert("Such book exists already");
await addBook();
} else {
throw err;
}
}
}
window.addEventListener('unhandledrejection', event => {
alert("Error: " + event.reason.message);
});
</script>

Binary file not shown.

Before

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 40 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 26 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 60 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 24 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 53 KiB

View file

@ -1,2 +0,0 @@
# Storing data in the browser

# Mutation observer
`MutationObserver` is a built-in object that observes a DOM element and fires a callback in case of changes.
We'll first see syntax, and then explore a real-world use case.
## Syntax
`MutationObserver` is easy to use.
First, we create an observer with a callback-function:
```js
let observer = new MutationObserver(callback);
```
And then attach it to a DOM node:
```js
observer.observe(node, config);
```
`config` is an object with boolean options "what kind of changes to react on":
- `childList` -- changes in the direct children of `node`,
- `subtree` -- in all descendants of `node`,
- `attributes` -- attributes of `node`,
- `attributeOldValue` -- record the old value of an attribute (implies `attributes`),
- `characterData` -- whether to observe `node.data` (text content),
- `characterDataOldValue` -- record the old value of `node.data` (implies `characterData`),
- `attributeFilter` -- an array of attribute names, to observe only selected ones.
Then after any changes, the `callback` is executed, with a list of [MutationRecord](https://dom.spec.whatwg.org/#mutationrecord) objects as the first argument, and the observer itself as the second argument.
[MutationRecord](https://dom.spec.whatwg.org/#mutationrecord) objects have properties:
- `type` -- mutation type, one of
- `"attributes"` (attribute modified)
- `"characterData"` (data modified)
- `"childList"` (elements added/removed),
- `target` -- where the change occurred: an element for `"attributes"`, a text node for `"characterData"`, or an element for a `"childList"` mutation,
- `addedNodes/removedNodes` -- nodes that were added/removed,
- `previousSibling/nextSibling` -- the previous and next sibling to added/removed nodes,
- `attributeName/attributeNamespace` -- the name/namespace (for XML) of the changed attribute,
- `oldValue` -- the previous value, only for attribute or text changes.
For example, here's a `<div>` with a `contentEditable` attribute. That attribute allows us to focus and edit the element.
```html run
<div contentEditable id="elem">Edit <b>me</b>, please</div>
<script>
let observer = new MutationObserver(mutationRecords => {
console.log(mutationRecords); // log the changes
});
observer.observe(elem, {
// observe everything except attributes
childList: true,
subtree: true,
characterDataOldValue: true
});
</script>
```
If we change the text inside `<b>me</b>`, we'll get a single mutation:
```js
mutationRecords = [{
type: "characterData",
oldValue: "me",
target: <text node>,
// other properties empty
}];
```
If we select and remove the `<b>me</b>` altogether, we'll get multiple mutations:
```js
mutationRecords = [{
type: "childList",
target: <div#elem>,
removedNodes: [<b>],
nextSibling: <text node>,
previousSibling: <text node>
// other properties empty
}, {
type: "characterData",
target: <text node>,
// ...details depend on how the browser handles the change
// it may coalesce two adjacent text nodes "Edit " and ", please" into one node
// or it can just delete the extra space after "Edit".
// may be one mutation or a few
}];
```
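A callback often receives several records per batch, as above. As a sketch (the helper name is ours, not part of the API), records can be tallied by type; plain objects work as stand-ins for real `MutationRecord` instances, so the logic runs even outside a browser:

```js
// Hypothetical helper: count mutation records by type.
function summarizeMutations(records) {
  const summary = { attributes: 0, characterData: 0, childList: 0 };
  for (const record of records) {
    summary[record.type]++;
  }
  return summary;
}

// Plain objects stand in for real MutationRecord instances:
const demo = [{ type: "childList" }, { type: "characterData" }];
console.log(summarizeMutations(demo)); // { attributes: 0, characterData: 1, childList: 1 }
```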
## Observer use case
When is `MutationObserver` needed? Is there a scenario where such a thing can be useful?
Sure, we can track something like `contentEditable` and create an "undo/redo" stack, but here's an example where `MutationObserver` is good from an architectural standpoint.
Let's say we're making a website about programming, like this one. Naturally, articles and other materials may contain source code snippets.
An HTML code snippet looks like this:
```html
...
<pre class="language-javascript"><code>
// here's the code
let hello = "world";
</code></pre>
...
```
There's also a JavaScript highlighting library, e.g. [Prism.js](https://prismjs.com/). A call to `Prism.highlightElement(pre)` examines the contents of such `pre` elements and adds colored syntax highlighting, similar to what you see in the examples on this page.
Generally, when the page loads, e.g. in a script at its bottom, we can search for elements `pre[class*="language"]` and call `Prism.highlightElement` on them:
```js
// highlight all code snippets on the page
document.querySelectorAll('pre[class*="language"]').forEach(Prism.highlightElement);
```
Now the `<pre>` snippet looks like this (without line numbers by default):
```js
// here's the code
let hello = "world";
```
Everything's simple so far, right? There are `<pre>` code snippets in HTML, we highlight them.
Now let's go on. Let's say we're going to dynamically fetch materials from a server. We'll study methods for that [later in the tutorial](info:fetch-basics). For now it only matters that we fetch an HTML article from a webserver and display it on demand:
```js
let article = /* fetch new content from server */
articleElem.innerHTML = article;
```
The new `article` HTML may contain code snippets. We need to call `Prism.highlightElement` on them, otherwise they won't get highlighted.
**Whose responsibility is it to call `Prism.highlightElement` for a dynamically loaded article?**
We could append that call to the code that loads an article, like this:
```js
let article = /* fetch new content from server */
articleElem.innerHTML = article;
*!*
let snippets = articleElem.querySelectorAll('pre[class*="language-"]');
snippets.forEach(Prism.highlightElement);
*/!*
```
...But imagine we have many places where we load content with code: articles, quizzes, forum posts. Do we need to put the highlighting call everywhere? And then we'd have to be careful not to forget it.
And what if the content is loaded by a third-party engine? E.g. we have a forum written by someone else that loads content dynamically, and we'd like to add syntax highlighting to it. No one likes to patch third-party scripts.
Luckily, there's another option.
We can use `MutationObserver` to automatically detect code snippets inserted in the page and highlight them.
So we'll handle the highlighting functionality in one place, relieving us from the need to integrate it into every content-loading script.
## Dynamic highlight demo
Here's the working example.
If you run this code, it starts observing the element below and highlighting any code snippets that appear there:
```js run
let observer = new MutationObserver(mutations => {
for(let mutation of mutations) {
// examine new nodes
for(let node of mutation.addedNodes) {
// skip newly added text nodes
if (!(node instanceof HTMLElement)) continue;
// check the inserted element for being a code snippet
if (node.matches('pre[class*="language-"]')) {
Prism.highlightElement(node);
}
// search its subtree for code snippets
for(let elem of node.querySelectorAll('pre[class*="language-"]')) {
Prism.highlightElement(elem);
}
}
}
});
let demoElem = document.getElementById('highlight-demo');
observer.observe(demoElem, {childList: true, subtree: true});
```
<p id="highlight-demo" style="border: 1px solid #ddd">Demo element with <code>id="highlight-demo"</code>, observed by the example above.</p>
The code below populates `innerHTML`. If you've run the code above, snippets will get highlighted:
```js run
let demoElem = document.getElementById('highlight-demo');
// dynamically insert content with code snippets
demoElem.innerHTML = `A code snippet is below:
<pre class="language-javascript"><code> let hello = "world!"; </code></pre>
<div>Another one:</div>
<div>
<pre class="language-css"><code>.class { margin: 5px; } </code></pre>
</div>
`;
```
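The node-filtering logic from the observer callback above can also be factored into a pure helper, which makes it easy to check outside a browser. The function name is ours, and mock objects with `matches`/`querySelectorAll` stand in for real elements in the demo:

```js
// Hypothetical helper: given addedNodes from a MutationRecord, return
// every element that is, or contains, a code snippet.
function collectSnippets(addedNodes) {
  const snippets = [];
  for (const node of addedNodes) {
    // text nodes have no .matches, skip them
    if (typeof node.matches !== 'function') continue;
    if (node.matches('pre[class*="language-"]')) snippets.push(node);
    snippets.push(...node.querySelectorAll('pre[class*="language-"]'));
  }
  return snippets;
}

// In a browser: collectSnippets(mutation.addedNodes).forEach(Prism.highlightElement);
```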
Now we have a `MutationObserver` that can track all code insertions in observed elements, or in the whole `document`, and highlight them. We can add/remove code snippets in HTML without thinking about it.
## Garbage collection
Observers use weak references to nodes internally. That is: if a node is removed from the DOM and becomes unreachable, then it can be garbage-collected; an observer doesn't prevent that.
Still, we can release observers at any time:
- `observer.disconnect()` -- stops the observation.
Additionally:
- `mutationRecords = observer.takeRecords()` -- gets a list of unprocessed mutation records, those that happened, but the callback did not handle them.
```js
// we're going to disconnect the observer
// it might have not yet handled some mutations
let mutationRecords = observer.takeRecords();
// process mutationRecords
// now all handled, disconnect
observer.disconnect();
```
## Summary
`MutationObserver` can react to changes in the DOM: attributes, added/removed elements, text content.
We can use it to track changes introduced by other parts of our own or 3rd-party code.
For example, to post-process dynamically inserted content delivered via `innerHTML`, like the highlighting in the example above.


@@ -1,4 +0,0 @@
# Miscellaneous
Not yet categorized articles.