Docs: Clean up for asciidoctor #1275

Merged 1 commit on Apr 8, 2019
4 changes: 4 additions & 0 deletions docs/src/reference/asciidoc/core/configuration.adoc
@@ -554,12 +554,16 @@ added[2.1]

added[2.2]
`es.net.proxy.https.host`:: Https proxy host name

added[2.2]
`es.net.proxy.https.port`:: Https proxy port

added[2.2]
`es.net.proxy.https.user`:: Https proxy user name

added[2.2]
`es.net.proxy.https.pass`:: <<keystore,Securable>>. Https proxy password

added[2.2]
`es.net.proxy.https.use.system.props`(default yes):: Whether to use the system Https proxy properties (namely `https.proxyHost` and `https.proxyPort`) or not

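These https proxy settings are plain key/value configuration properties. As a minimal sketch (not part of this PR; the host, port, and credentials below are placeholder values), they could be wired up through a Spark +SparkConf+:

[source,scala]
----
import org.apache.spark.SparkConf

// Placeholder proxy coordinates -- not taken from this PR.
val conf = new SparkConf()
  .set("es.net.proxy.https.host", "proxy.example.com")
  .set("es.net.proxy.https.port", "8443")
  .set("es.net.proxy.https.user", "es-user")
  .set("es.net.proxy.https.pass", "changeme")
  // ignore the JVM-wide https.proxyHost/https.proxyPort settings
  .set("es.net.proxy.https.use.system.props", "no")
----
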
2 changes: 1 addition & 1 deletion docs/src/reference/asciidoc/core/pig.adoc
@@ -164,7 +164,7 @@ For example:
[source,sql]
----
STORE B INTO '...' USING org.elasticsearch.hadoop.pig.EsStorage(
-  'es.mapping.names=date:@timestamp, uRL:url') <1>
+  'es.mapping.names=date:@timestamp, uRL:url') <1>
----

<1> Pig column `date` mapped in {es} to `@timestamp`; Pig column `uRL` mapped in {es} to `url`
4 changes: 2 additions & 2 deletions docs/src/reference/asciidoc/core/spark.adoc
@@ -294,7 +294,7 @@ saveToEs(javaRDD, "my-collection-{media_type}/doc"); <1>
[[spark-write-meta]]
==== Handling document metadata

-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Furthermore, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Furthermore, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
In other words, for ++RDD++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.

The metadata is described through the +Metadata+ Java http://docs.oracle.com/javase/tutorial/java/javaOO/enum.html[enum] within +org.elasticsearch.spark.rdd+ package which identifies its type - +id+, +ttl+, +version+, etc...
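
The pair-RDD mechanism described above can be sketched as follows, assuming the +saveToEsWithMeta+ method brought in by the +org.elasticsearch.spark+ package and a hypothetical +airports/doc+ resource (documents and metadata values below are illustrative only):

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._                // adds saveToEsWithMeta to pair RDDs
import org.elasticsearch.spark.rdd.Metadata._   // ID, TTL, VERSION, ...

val sc = new SparkContext(new SparkConf())

// Document sources
val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val sfo = Map("iata" -> "SFO", "name" -> "San Fran")

// Per-document metadata, keyed by the Metadata enum
val otpMeta = Map(ID -> 1, VERSION -> "23")
val sfoMeta = Map(ID -> 2, VERSION -> "3")

// Pair RDD of (metadata, document): the key carries the metadata,
// the value becomes the document source
sc.makeRDD(Seq((otpMeta, otp), (sfoMeta, sfo)))
  .saveToEsWithMeta("airports/doc")
----
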
@@ -922,7 +922,7 @@ jssc.start();
[[spark-streaming-write-meta]]
==== Handling document metadata

-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Furthermore, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Furthermore, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].

This is no different in Spark Streaming. For ++DStream++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.

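A corresponding Spark Streaming sketch, again with illustrative values only (the +org.elasticsearch.spark.streaming+ import is what adds +saveToEsWithMeta+ to ++DStream++s; the queue-backed stream is just a toy source):

[source,scala]
----
import scala.collection.mutable
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.elasticsearch.spark.streaming._      // adds saveToEsWithMeta to DStreams
import org.elasticsearch.spark.rdd.Metadata._

val ssc = new StreamingContext(sc, Seconds(1))

val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val otpMeta = Map(ID -> 1)

// Toy source: a queue-backed DStream of (metadata, document) tuples
val stream = ssc.queueStream(mutable.Queue(sc.makeRDD(Seq((otpMeta, otp)))))
stream.saveToEsWithMeta("airports/doc")

ssc.start()
----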