Merge branch 'current' into next

This commit is contained in:
Fabian Affolter 2017-10-10 00:39:51 +02:00
commit 5d0a917cdd
No known key found for this signature in database
GPG key ID: DDF3D6F44AAB1336
112 changed files with 1014 additions and 352 deletions

View file

@ -33,7 +33,7 @@ The built-in Alexa component allows you to integrate Home Assistant into Alexa/A
### {% linkable_title Requirements %}
Amazon requires the endpoint of a skill to be hosted via SSL. Self-signed certificates are OK because our skills will only run in development mode. Read more on [our blog][blog-lets-encrypt] about how to set up encryption for Home Assistant. When running Hass.io, using the [Let's Encrypt](/addons/lets_encrypt/) the and [Duck DNS](/addons/duckdns/) add-ons is the easiest method. If you are unable to get HTTPS up and running, consider using [this AWS Lambda proxy for Alexa skills](https://community.home-assistant.io/t/aws-lambda-proxy-custom-alexa-skill-when-you-dont-have-https/5230).
Amazon requires the endpoint of a skill to be hosted via SSL. Self-signed certificates are OK because our skills will only run in development mode. Read more on [our blog][blog-lets-encrypt] about how to set up encryption for Home Assistant. When running Hass.io, using the [Let's Encrypt](/addons/lets_encrypt/) and [Duck DNS](/addons/duckdns/) add-ons is the easiest method. If you are unable to get HTTPS up and running, consider using [this AWS Lambda proxy for Alexa skills](https://community.home-assistant.io/t/aws-lambda-proxy-custom-alexa-skill-when-you-dont-have-https/5230).
Additionally, note that at the time of this writing, your Alexa skill endpoint *must* accept requests over port 443 (Home Assistant defaults to port 8123). There are two ways you can handle this:

View file

@ -25,7 +25,7 @@ The requirement is that you have setup [Wink](/components/wink/).
- Window/Door sensors
- Motion sensors
- Ring Door bells (No hub required)
- Liquid presense sensors
- Liquid presence sensors
- Z-wave lock key codes
- Lutron connected bulb remote buttons
- Wink Relay buttons and presence detection

View file

@ -12,17 +12,18 @@ ha_category: Cover
ha_release: 0.55
---
The `rflink` component support devices that use [RFLink gateway firmware](http://www.nemcon.nl/blog2/), for example, the [Nodo RFLink Gateway](https://www.nodo-shop.nl/nl/21-rflink-gateway). RFLink gateway is an Arduino firmware that allows two-way communication with a multitude of RF wireless devices using cheap hardware (Arduino + transceiver).
The `rflink` cover platform supports devices that use [RFLink gateway firmware](http://www.nemcon.nl/blog2/), for example, the [Nodo RFLink Gateway](https://www.nodo-shop.nl/nl/21-rflink-gateway). RFLink gateway is an Arduino firmware that allows two-way communication with a multitude of RF wireless devices using cheap hardware (Arduino + transceiver).
First, you have to set up your [rflink hub](/components/rflink/).
After configuring the RFLink hub, covers will be automatically discovered and added, except for Somfy RTS devices.
### {% setting up a Somfy RTS device%}
### {% linkable_title Setting up a Somfy RTS device %}
You have to add the Somfy RTS manually with the supplied RFlinkLoader (Windows Only)
You have to add the Somfy RTS manually with the supplied RFlinkLoader (Windows only).
Press the Learn button on the original Somfy remote
enter the following code within 3 seconds. Your blinds will go up and down shortly
Press the Learn button on the original Somfy remote and enter the following code within 3 seconds. Your blinds will go up and down shortly:
````
10;RTS;02FFFF;0412;3;PAIR;
@ -53,7 +54,7 @@ RTS Record: 14 Address: FFFFFF RC: FFFF
RTS Record: 15 Address: FFFFFF RC: FFFF
````
After configuring the RFLink Somfy RTS you have to add the cover to the configuration like any other RFlink device.
After configuring the RFLink Somfy RTS you have to add the cover to the `configuration.yaml` file like any other RFlink device.
RFLink cover IDs are composed of: protocol, ID, and gateway. For example: `RTS_0100F2_0`.
@ -63,16 +64,16 @@ Assigning a name to a cover:
```yaml
# Example configuration.yaml entry
- platform: rflink
  devices:
    RTS_0100F2_0:
      name: SunShade
    bofumotor_455201_0f:
      name: Sovrumsgardin
cover:
  - platform: rflink
    devices:
      RTS_0100F2_0:
        name: SunShade
      bofumotor_455201_0f:
        name: Sovrumsgardin
```
### Configuration variables:
Configuration variables:
- **automatic_add** (*Optional*): Automatically add new/unconfigured devices to Home Assistant if detected (default: True).
- **devices** (*Optional*): A list of devices with their name to use in the frontend.
@ -80,7 +81,7 @@ Assigning a name to a cover:
- **fire_event** (*Optional*): Set default `fire_event` for RFLink cover devices (see below).
- **signal_repetitions** (*Optional*): Set default `signal_repetitions` for RFLink cover devices (see below).
### Device configuration variables:
Device configuration variables:
- **name** (*Optional*): Name for the device, defaults to the RFLink ID.
- **aliases** (*Optional*): Alternative RFLink IDs this device is known by.

View file

@ -96,7 +96,7 @@ homeassistant:
  customize:
    light.bedroom_light:
      # Don't allow light.bedroom_light to be controlled by the emulated Hue bridge
      emulated_hue: false
      emulated_hue_hidden: false
    light.office_light:
      # Address light.office_light as "back office light"
      emulated_hue_name: "back office light"
@ -104,7 +104,7 @@ homeassistant:
The following are attributes that can be applied in the `customize` section:
- **emulated_hue** (*Optional*): Whether or not the entity should be exposed by the emulated Hue bridge. The default value for this attribute is controlled by the `expose_by_default` option.
- **emulated_hue_hidden** (*Optional*): Whether or not the entity should be exposed by the emulated Hue bridge. Adding `emulated_hue_hidden: false` will expose the entity to Alexa. The default value for this attribute is controlled by the `expose_by_default` option.
- **emulated_hue_name** (*Optional*): The name that the emulated Hue will use. The default for this is the entity's friendly name.
### {% linkable_title Troubleshooting %}

View file

@ -18,7 +18,7 @@ You need the `ffmpeg` binary in your system path. On Debian 8 or Raspbian (Jessi
</p>
<p class='note'>
If you are using [Hass.io](/hassio/) then just move forward to the configuration as all requirements are already fullfilled.
If you are using [Hass.io](/hassio/) then just move forward to the configuration as all requirements are already fulfilled.
</p>
To set it up, add the following information to your `configuration.yaml` file:
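As a minimal sketch (the binary path below is an assumption; it is only needed when `ffmpeg` is not on the system path), the entry can look like:

```yaml
# Sketch of a configuration.yaml entry; ffmpeg_bin is optional and
# the path shown is a placeholder for your actual ffmpeg location
ffmpeg:
  ffmpeg_bin: /usr/bin/ffmpeg
```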

View file

@ -15,7 +15,7 @@ ha_release: 0.47
[OpenCV](http://www.opencv.org) is an open source computer vision image and video processing library.
Some pre-defined classifiers can be found here: https://github.com/opencv/opencv/tree/master/data
Some pre-defined classifiers can be found [here](https://github.com/opencv/opencv/tree/master/data).
To setup OpenCV with Home Assistant, add the following section to your `configuration.yaml` file:
@ -34,7 +34,7 @@ image_processing:
- **source** array (*Required*): List of image sources.
- **entity_id** (*Required*): A camera entity id to get picture from.
- **name** (*Optional*): This parameter allows you to override the name of your `image_processing` entity.
- **classifier** (*Optional*): Dictionary of name to path to the classifier xml file. If this field is not provided, a face classifier will be downloaded from OpenCV's github repo.
- **classifier** (*Optional*): Dictionary of name to path to the classifier xml file. If this field is not provided, a face classifier will be downloaded from OpenCV's GitHub repo.
**classifier** may also be defined as a dictionary of names to classifier configurations:
@ -50,4 +50,4 @@ image_processing:
- **scale** (*Optional*): The scale to perform when processing; this is a `float` value that must be greater than or equal to `1.0`. Default is `1.1`.
- **neighbors** (*Optional*): The minimum number of neighbors required for a match. Default is `4`. The higher this number, the more picky the matching will be; the lower the number, the more false positives you may experience.
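For illustration, a classifier dictionary entry could combine the `scale` and `neighbors` options above with a path to a cascade file. This is a hypothetical sketch: the `file` key name and the paths are placeholders, not confirmed API.

```yaml
# Hypothetical sketch of a classifier dictionary; only scale and
# neighbors are documented above, other key names are placeholders
image_processing:
  - platform: opencv
    source:
      - entity_id: camera.front_door
    classifier:
      face:
        file: /path/to/haarcascade_frontalface_default.xml
        scale: 1.1
        neighbors: 4
```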
If you would like to see the regions that OpenCV has detected, add this opencv camera to your config's custom_components/camera directory: [https://gist.github.com/Teagan42/bf4b941b34a79a3e184e149ff1efd82f](https://gist.github.com/Teagan42/bf4b941b34a79a3e184e149ff1efd82f)
If you would like to see the regions that OpenCV has detected, add this OpenCV camera to your config's `custom_components/camera` directory: [https://gist.github.com/Teagan42/bf4b941b34a79a3e184e149ff1efd82f](https://gist.github.com/Teagan42/bf4b941b34a79a3e184e149ff1efd82f)

View file

@ -9,7 +9,7 @@ sharing: true
footer: true
logo: home-assistant.png
ha_category: Automation
ha_release: TODO
ha_release: 0.55
---
The `input_datetime` component allows the user to define date and time values that can be controlled via the frontend and can be used within automations and templates.
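A minimal sketch of an entry (the entity name is illustrative, and the `has_date`/`has_time` option names are assumptions):

```yaml
# Sketch; alarm_time is an illustrative entity name and
# has_date/has_time are assumed option names
input_datetime:
  alarm_time:
    name: Alarm time
    has_date: false
    has_time: true
```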

View file

@ -10,6 +10,7 @@ footer: true
logo: home-assistant.png
ha_category: Automation
ha_release: 0.16
redirect_from: /components/input_slider/
---
The `input_number` component allows the user to define values that can be controlled via the frontend and can be used within automation conditions. The frontend can display a slider or a numeric input box. Changes to the slider or numeric input box generate state events. These state events can be utilized as `automation` triggers as well.
@ -43,6 +44,8 @@ Configuration variables:
- **initial** (*Optional*): Initial value when Home Assistant starts. Defaults to 0.
- **step** (*Optional*): Step value for the slider. Defaults to 1.
- **mode** (*Optional*): Can specify `box`, or `slider`. Defaults to `slider`.
- **unit_of_measurement** (*Optional*): Unit of measurement in which the value of the slider is expressed.
- **icon** (*Optional*): Icon to display in front of the box/slider in the frontend. Refer to the [Customizing devices](https://home-assistant.io/docs/configuration/customizing-devices/#possible-values) page for possible values.
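A sketch combining the options above (the entity name is illustrative, and `min`/`max` are assumed additional options):

```yaml
# Sketch; min/max are assumed options, the rest are documented above
input_number:
  bedroom_brightness:
    name: Bedroom brightness
    initial: 30
    min: 0
    max: 255
    step: 1
    mode: slider
    icon: mdi:lightbulb
```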
## {% linkable_title Automation Examples %}

View file

@ -33,7 +33,7 @@ Configuration variables:
- **devices** array (*Required*): A list of lights to use.
- **[mac address]** (*Required*): The bluetooth address of the switch.
- **name** (*Optional*): The custom name to use in the frontend.
- **api_key** (*Required*): The API key to acces the device.
- **api_key** (*Required*): The API key to access the device.
<p class='note'>
If you get an error looking like this:

View file

@ -46,7 +46,7 @@ Every time someone rings the bell, a `nello_bell_ring` event will be fired.
Field | Description
----- | -----------
`address` | Postal address of the lock.
`date` | Date when the event occured.
`date` | Date when the event occurred.
`description` | Human readable string describing the event.
`location_id` | Nello ID of the location where the bell has been rung.
`short_id` | Shorter Nello ID.
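The event and its fields can be used in an automation; a sketch (the notify service name is a placeholder):

```yaml
# Sketch: react to the nello_bell_ring event documented above;
# notify.notify is a placeholder for your actual notify service
automation:
  - alias: Doorbell notification
    trigger:
      platform: event
      event_type: nello_bell_ring
    action:
      service: notify.notify
      data_template:
        message: "Bell rung at {{ trigger.event.data.address }}"
```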

View file

@ -41,20 +41,29 @@ media_extractor:
music: bestaudio[ext=mp3]
```
This configuration sets query for all service calls like: ```{"entity_id": "media_player.my_sonos", "media_content_id": "https://soundcloud.com/bruttoband/brutto-11", "media_content_type": "music"}``` to 'bestaudio' with mp3 extention.
This configuration sets the query for all service calls like the following to 'bestaudio' with the mp3 extension:
```json
{
  "entity_id": "media_player.my_sonos",
  "media_content_id": "https://soundcloud.com/bruttoband/brutto-11",
  "media_content_type": "music"
}
```
Query examples with explanations:
* **bestvideo** - best video only stream
* **best** - best video + audio stream
* **bestaudio[ext=m4a]** - best audio stream with m4a extension
* **worst** - worst video + audio stream
* **bestaudio[ext=m4a]/bestaudio[ext=ogg]/bestaudio** - best m4a audio, otherwise best ogg audio and only then any best audio
* **bestvideo**: Best video only stream
* **best**: Best video + audio stream
* **bestaudio[ext=m4a]**: Best audio stream with m4a extension
* **worst**: Worst video + audio stream
* **bestaudio[ext=m4a]/bestaudio[ext=ogg]/bestaudio**: Best m4a audio, otherwise best ogg audio and only then any best audio
More info about queries can be found [here](https://github.com/rg3/youtube-dl#format-selection).
### {% linkable_title Use the service %}
Go to the "Developer Tools," then to "Call Service," and choose `media_extractor/play_media` from the list of available services. Fill the "Service Data" field as shown in the example below and hit "CALL SERVICE."
Use <img src='/images/screenshots/developer-tool-services-icon.png' alt='service developer tool icon' class="no-shadow" height="38" /> **Services** from the **Developer Tools**. Choose `media_extractor` from the dropdown menu **Domain** and `play_media` from **Service**, enter something like the JSON sample from above into the **Service Data** field, and hit **CALL SERVICE**.
This will download the file from the given URL.
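The same call can also be scripted; a sketch reusing the sample service data from above:

```yaml
# Sketch: calling media_extractor.play_media from a script
# with the sample service data shown earlier
script:
  play_soundcloud_track:
    sequence:
      - service: media_extractor.play_media
        data:
          entity_id: media_player.my_sonos
          media_content_id: https://soundcloud.com/bruttoband/brutto-11
          media_content_type: music
```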

View file

@ -43,7 +43,7 @@ Configuration variables:
- **port** (*Optional*): The port number. Defaults to 80.
- **password** (*Optional*): PIN code of the Internet Radio. Defaults to 1234.
Some models use a seperate port (2244) for API access, this can be verified by visiting http://[host]:[port]/device.
Some models use a separate port (2244) for API access; this can be verified by visiting `http://[host]:[port]/device`.
In case your device (friendly name) is called *badezimmer*, an example automation can look something like this:
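A sketch of such an automation (the trigger time and alias are illustrative):

```yaml
# Illustrative sketch: turn the badezimmer radio on at a fixed time
automation:
  - alias: Morning radio
    trigger:
      platform: time
      at: "07:00:00"
    action:
      service: media_player.turn_on
      entity_id: media_player.badezimmer
```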

View file

@ -14,7 +14,7 @@ ha_release: 0.37
The [Discord service](https://discordapp.com/) is a platform for the notify component. This allows components to send messages to the user using Discord.
In order to get a token you need to go to the [Discord My Apps page](https://discordapp.com/developers/applications/me) and create a new application. Once the application is ready, create a [bot](https://discordapp.com/developers/docs/topics/oauth2#bots) user (**Create a Bot User**) and activate **Require OAuth2 Code Grant**. Retreive the **Client ID** and the (hidden) **Token** of your bot for later.
In order to get a token you need to go to the [Discord My Apps page](https://discordapp.com/developers/applications/me) and create a new application. Once the application is ready, create a [bot](https://discordapp.com/developers/docs/topics/oauth2#bots) user (**Create a Bot User**) and activate **Require OAuth2 Code Grant**. Retrieve the **Client ID** and the (hidden) **Token** of your bot for later.
When setting up the application you can use this [icon](https://home-assistant.io/demo/favicon-192x192.png).

View file

@ -2,7 +2,7 @@
layout: page
title: "Recorder"
description: "Instructions how to configure the data recorder for Home Assistant."
date: 2016-05-21 09:00
date: 2017-09-24 09:00
sidebar: true
comments: false
sharing: true
@ -27,7 +27,8 @@ recorder:
Configuration variables:
- **purge_days** (*Optional*): Delete events and states older than x days. The purge task runs every 2 days, starting from when Home Assistant is started if you restart your instance more frequently than the purge will never take place.
- **purge_interval** (*Optional*): (days) Enable scheduled purge of older events and states. The purge task runs every x days, starting from when Home Assistant is started. If you restart your instance more frequently, the purge will never take place. You can use the [service](#service) call `recorder.purge` when needed.
- **purge_keep_days** (*Required with `purge_interval`*): Specify number of history days to keep in recorder database after purge.
- **exclude** (*Optional*): Configure which components should be excluded from recordings.
- **entities** (*Optional*): The list of entity ids to be excluded from recordings.
- **domains** (*Optional*): The list of domains to be excluded from recordings.
@ -42,7 +43,8 @@ Define domains and entities to `exclude` (aka. blacklist). This is convenient wh
```yaml
# Example configuration.yaml entry with exclude
recorder:
  purge_days: 5
  purge_interval: 2
  purge_keep_days: 5
  db_url: sqlite:///home/user/.homeassistant/test
  exclude:
    domains:
@ -85,6 +87,19 @@ recorder:
If you only want to hide events from e.g. your history, take a look at the [`history` component](/components/history/). The same goes for the logbook. But if you have privacy concerns about certain events, or don't want them in either history or logbook, you should use the `exclude`/`include` options of the `recorder` component so that they aren't even in your database. That way you can save storage and keep the database small by excluding certain often-logged events (like `sensor.last_boot`).
### {% linkable_title Service `purge` %}
Call the service `recorder.purge` to start a purge task, which deletes events and states older than x days, according to the `keep_days` service data (*Required*).
Automation [action](https://home-assistant.io/getting-started/automation-action/) example:
```yaml
action:
  service: recorder.purge
  data:
    keep_days: 5
```
## {% linkable_title Custom database engines %}
| Database engine | `db_url` |
@ -109,14 +124,14 @@ Not all Python bindings for the chosen database engine can be installed directly
For MariaDB you may have to install a few dependencies. On the Python side we use the `mysqlclient`:
```bash
$ sudo apt-get install libmariadbclient-dev
$ sudo apt-get install libmariadbclient-dev libssl-dev
$ pip3 install mysqlclient
```
For MySQL you may have to install a few dependencies. You can choose between `pymysql` and `mysqlclient`:
```bash
$ sudo apt-get install default-libmysqlclient-dev
$ sudo apt-get install default-libmysqlclient-dev libssl-dev
$ pip3 install mysqlclient
```
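Once the bindings are installed, point the recorder at the database via `db_url`; a sketch (user, password, and database name are placeholders):

```yaml
# Sketch: recorder with a MySQL/MariaDB connection string;
# credentials and database name are placeholders
recorder:
  db_url: mysql://hass_user:hass_password@localhost/hass_db?charset=utf8
```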

View file

@ -132,7 +132,7 @@ AQI | Status | Description
201 - 300 | **Very unhealthy** | Health warnings of emergency conditions. The entire population is more likely to be affected
301+ | **Hazardous** | Health alert: everyone may experience more serious health effects
### Air Polution Level
### Air Pollution Level
**Description:** This sensor displays the associated `Status` (from the above
table) for the current AQI.

View file

@ -70,7 +70,7 @@ $ python3
{'thing': 'ha-sensor', 'created': '2015-12-10T09:46:08.559Z', 'content': {'humiditiy': 81, 'temperature': 23}}
```
Recieve the latest dweet.
Receive the latest dweet.
```bash
>>> dweepy.get_latest_dweet_for('ha-sensor')

View file

@ -17,7 +17,7 @@ The `vera` platform allows you to get data from your [Vera](http://getvera.com/)
They will be automatically discovered if the vera component is loaded.
Please note that some vera sensors (such as _motion_ and _flood_ sensors) are _armable_ which means that vera will send alerts (email messages ot txts) when they are _armed_ an change state.
Please note that some Vera sensors (such as _motion_ and _flood_ sensors) are _armable_, which means that Vera will send alerts (email messages or texts) when they are _armed_ and change state.
Home Assistant will display the state of these sensors regardless of the _armed_ state.

View file

@ -21,7 +21,7 @@ The Things network support various integrations to make the data available:
|---|---|
| [MQTT](https://www.thethingsnetwork.org/docs/applications/mqtt/) | |
| [Storage](https://www.thethingsnetwork.org/docs/applications/storage/) | [`thethingsnetwork`](/component/sensor.thethingsnetwork/) |
| [HTTP](https://www.thethingsnetwork.org/docs/applications/http/} | |
| [HTTP](https://www.thethingsnetwork.org/docs/applications/http/) | |
### {% linkable_title Setup %}

View file

@ -29,7 +29,7 @@ Configuration variables:
- **language** (*Optional*): The language to use. Defaults to `en-US`. Supported are `en-US`, `ru-RU`, `uk-UK`, and `tr-TR`.
- **codec** (*Optional*): Audio codec. Default is `mp3`. Supported are `mp3`, `wav`, and `opus`.
- **voice** (*Optional*): Speaker voice. Default is `zahar`. Supported female voices are `jane`, `oksana`, `alyss`, `omazh` and male voices are `zahar` and `ermil`.
- **emotion** (*Optional*): Speaker emotional intonation. Default is `neutral`. Also supported are `good` (freindly) and `evil` (angry)
- **emotion** (*Optional*): Speaker emotional intonation. Default is `neutral`. Also supported are `good` (friendly) and `evil` (angry).
- **speed** (*Optional*): Speech speed. Default value is `1`. Highest speed is `3` and lowest is `0.1`.
Please check the [API documentation](https://tech.yandex.com/speechkit/cloud/doc/guide/concepts/tts-http-request-docpage/) for details. It seems that the English version of documentation is outdated. You could request an API key [by email](https://tech.yandex.com/speechkit/cloud/) or [online](https://developer.tech.yandex.ru/).
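A configuration sketch combining the options above (the platform name `yandextts` is an assumption and the API key is a placeholder):

```yaml
# Sketch; platform name assumed, API key is a placeholder
tts:
  - platform: yandextts
    api_key: YOUR_API_KEY
    language: ru-RU
    voice: oksana
    emotion: good
    speed: 1
```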

View file

@ -23,13 +23,34 @@ To integrate this into Home Assistant, add the following section to your `config
```yaml
# Example configuration.yaml entry with custom external portal
upnp:
  external_port: 80
  ports:
    hass: 8000
    8080: 8080
```
If you wish to have the statistics without having port mapping done through IGD, add the option **port_mapping**.
Configuration variables:
- **external_port** (*Optional*): Expose Home Assistant to the internet over this TCP port. Defaults to Home Assistant configured port.
- **port_mapping** (*Optional*): Disables port mapping maintains the network statistics sensors)
- **unit** (*Optional*): UPnP sensors unit. Valid units are 'Bytes', 'KBytes', 'MBytes' and 'GBytes'.
{% configuration %}
ports:
  description: Map of ports to map from internal to external. Pass 'hass' as the internal port to use the port Home Assistant runs on.
  required: false
  type: map
  default: Open the same port on the external router as the one Home Assistant runs on locally and forward it.
port_mapping:
  description: If the component should try to map ports.
  required: false
  type: boolean
  default: false
units:
  description: Define the unit used for the UPnP sensors. Possible values are Bytes, KBytes, MBytes, GBytes.
  required: false
  type: string
  default: MBytes
local_ip:
  description: The local IP address of the computer running Home Assistant.
  required: false
  type: string
  default: Try to auto-detect the IP of the host.
{% endconfiguration %}