Fixed common spelling mistakes (#3544)

* fix spelling errors

* Update binary_sensor.xiaomi_aqara.markdown

Reverts to previous revision before spell check.

* Update tellstick.markdown

Reverts to previous revision before spell check.

* Update owntracks_two_mqtt_broker.markdown

Reverts to previous revision before spell check.

* Update cla_sign.html

Reverts to previous revision before spell check.

* Update credits.markdown

Reverts to previous revision before spell check.

* Update api.markdown

Fixed spell checker changing noone to no one.
Authored by Ashton Campbell on 2017-10-07 17:39:32 -05:00, committed by Fabian Affolter
parent ae24b5142f
commit 9e6b9cb658
68 changed files with 90 additions and 90 deletions

@@ -25,7 +25,7 @@ The requirement is that you have setup [Wink](/components/wink/).
- Window/Door sensors
- Motion sensors
- Ring Door bells (No hub required)
- Liquid presense sensors
- Liquid presence sensors
- Z-wave lock key codes
- Lutron connected bulb remote buttons
- Wink Relay buttons and presence detection

@@ -18,7 +18,7 @@ You need the `ffmpeg` binary in your system path. On Debian 8 or Raspbian (Jessi
</p>
<p class='note'>
If you are using [Hass.io](/hassio/) then just move forward to the configuration as all requirements are already fullfilled.
If you are using [Hass.io](/hassio/) then just move forward to the configuration as all requirements are already fulfilled.
</p>
To set it up, add the following information to your `configuration.yaml` file:
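For reference, a minimal `configuration.yaml` entry might look like the sketch below; the binary path is an assumption and only needs to be set if `ffmpeg` is not already on the system path.

```yaml
# Minimal sketch; ffmpeg_bin is assumed and can be omitted if ffmpeg is on the PATH.
ffmpeg:
  ffmpeg_bin: /usr/bin/ffmpeg
```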

@@ -33,7 +33,7 @@ Configuration variables:
- **devices** array (*Required*): A list of lights to use.
- **[mac address]** (*Required*): The bluetooth address of the switch.
- **name** (*Optional*): The custom name to use in the frontend.
- **api_key** (*Required*): The API key to acces the device.
- **api_key** (*Required*): The API key to access the device.
<p class='note'>
If you get an error looking like this:

@@ -46,7 +46,7 @@ Every time someone rings the bell, a `nello_bell_ring` event will be fired.
Field | Description
----- | -----------
`address` | Postal address of the lock.
`date` | Date when the event occured.
`date` | Date when the event occurred.
`description` | Human readable string describing the event.
`location_id` | Nello ID of the location where the bell has been rung.
`short_id` | Shorter Nello ID.
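A hypothetical automation sketch that reacts to this event; the `notify.notify` target and the message wording are assumptions, while the `address` field comes from the table above.

```yaml
automation:
  - alias: "Nello bell ring notification"
    trigger:
      - platform: event
        event_type: nello_bell_ring
    action:
      - service: notify.notify        # assumed notification target
        data_template:
          message: "Doorbell rung at {{ trigger.event.data.address }}"
```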

@@ -41,7 +41,7 @@ media_extractor:
music: bestaudio[ext=mp3]
```
This configuration sets query for all service calls like: ```{"entity_id": "media_player.my_sonos", "media_content_id": "https://soundcloud.com/bruttoband/brutto-11", "media_content_type": "music"}``` to 'bestaudio' with mp3 extention.
This configuration sets query for all service calls like: ```{"entity_id": "media_player.my_sonos", "media_content_id": "https://soundcloud.com/bruttoband/brutto-11", "media_content_type": "music"}``` to 'bestaudio' with mp3 extension.
Query examples with explanations:
* **bestvideo** - best video only stream

@@ -43,7 +43,7 @@ Configuration variables:
- **port** (*Optional*): The port number. Defaults to 80.
- **password** (*Optional*): PIN code of the Internet Radio. Defaults to 1234.
Some models use a seperate port (2244) for API access, this can be verified by visiting http://[host]:[port]/device.
Some models use a separate port (2244) for API access, this can be verified by visiting http://[host]:[port]/device.
In case your device (friendly name) is called *badezimmer*, an example automation can look something like this:
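As a rough sketch, assuming the radio is exposed as `media_player.badezimmer`, such an automation could simply turn it on at a fixed time; the trigger and service below are illustrative assumptions.

```yaml
automation:
  - alias: "Morning radio in the badezimmer"
    trigger:
      - platform: time
        at: "07:00:00"                         # assumed trigger time
    action:
      - service: media_player.turn_on
        entity_id: media_player.badezimmer     # assumed entity id derived from the friendly name
```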

@@ -14,7 +14,7 @@ ha_release: 0.37
The [Discord service](https://discordapp.com/) is a platform for the notify component. This allows components to send messages to the user using Discord.
In order to get a token you need to go to the [Discord My Apps page](https://discordapp.com/developers/applications/me) and create a new application. Once the application is ready, create a [bot](https://discordapp.com/developers/docs/topics/oauth2#bots) user (**Create a Bot User**) and activate **Require OAuth2 Code Grant**. Retreive the **Client ID** and the (hidden) **Token** of your bot for later.
In order to get a token you need to go to the [Discord My Apps page](https://discordapp.com/developers/applications/me) and create a new application. Once the application is ready, create a [bot](https://discordapp.com/developers/docs/topics/oauth2#bots) user (**Create a Bot User**) and activate **Require OAuth2 Code Grant**. Retrieve the **Client ID** and the (hidden) **Token** of your bot for later.
When setting up the application you can use this [icon](https://home-assistant.io/demo/favicon-192x192.png).
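A minimal sketch of the resulting `configuration.yaml` entry, where `YOUR_BOT_TOKEN` is a placeholder for the token retrieved above:

```yaml
notify:
  - name: discord
    platform: discord
    token: YOUR_BOT_TOKEN   # placeholder for the bot token retrieved above
```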

@@ -132,7 +132,7 @@ AQI | Status | Description
201 - 300 | **Very unhealthy** | Health warnings of emergency conditions. The entire population is more likely to be affected
301+ | **Hazardous** | Health alert: everyone may experience more serious health effects
### Air Polution Level
### Air Pollution Level
**Description:** This sensor displays the associated `Status` (from the above
table) for the current AQI.

@@ -70,7 +70,7 @@ $ python3
{'thing': 'ha-sensor', 'created': '2015-12-10T09:46:08.559Z', 'content': {'humiditiy': 81, 'temperature': 23}}
```
Recieve the latest dweet.
Receive the latest dweet.
```bash
>>> dweepy.get_latest_dweet_for('ha-sensor')

@@ -17,7 +17,7 @@ The `vera` platform allows you to get data from your [Vera](http://getvera.com/)
They will be automatically discovered if the vera component is loaded.
Please note that some vera sensors (such as _motion_ and _flood_ sensors) are _armable_ which means that vera will send alerts (email messages ot txts) when they are _armed_ an change state.
Please note that some vera sensors (such as _motion_ and _flood_ sensors) are _armable_ which means that vera will send alerts (email messages to txts) when they are _armed_ an change state.
Home Assistant will display the state of these sensors regardless of the _armed_ state.

@@ -29,7 +29,7 @@ Configuration variables:
- **language** (*Optional*): The language to use. Defaults to `en-US`. Supported `en-US`, `ru-RU`, `uk-UK`, `tr-TR`.
- **codec** (*Optional*): Audio codec. Default is `mp3`. Supported us `mp3`, `wav`, `opus`.
- **voice** (*Optional*): Speaker voice. Default is `zahar`. Supported female voices are `jane`, `oksana`, `alyss`, `omazh` and male voices are `zahar` and `ermil`.
- **emotion** (*Optional*): Speaker emotional intonation. Default is `neutral`. Also supported are `good` (freindly) and `evil` (angry)
- **emotion** (*Optional*): Speaker emotional intonation. Default is `neutral`. Also supported are `good` (friendly) and `evil` (angry)
- **speed** (*Optional*): Speech speed. Default value is `1`. Highest speed is `3` and lowest `0,1`
Please check the [API documentation](https://tech.yandex.com/speechkit/cloud/doc/guide/concepts/tts-http-request-docpage/) for details. It seems that the English version of documentation is outdated. You could request an API key [by email](https://tech.yandex.com/speechkit/cloud/) or [online](https://developer.tech.yandex.ru/).
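A minimal sketch, assuming the platform key is `yandextts` and using placeholder values for the options listed above:

```yaml
tts:
  - platform: yandextts      # assumed platform key
    api_key: YOUR_API_KEY    # placeholder for the key requested by email or online
    language: ru-RU
    voice: oksana
    emotion: good
```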