
Commit 3e68b00

Docs: Clean up for asciidoctor (#1275)
This makes three changes in preparation for switching the docs to Asciidoctor:

1. Fixes a broken link. As a side effect, this fixes a missing emphasis in Asciidoctor that was caused by parsing issues with the `_` in the old link.
2. Fixes an `added` macro that renders "funny" in Asciidoctor.
3. Replaces a tab in a code example with spaces. AsciiDoc converted the tab automatically, but Asciidoctor preserves it; we don't need the tab.
1 parent: 69e6a7c

File tree

3 files changed: +7, -3 lines


docs/src/reference/asciidoc/core/configuration.adoc

Lines changed: 4 additions & 0 deletions
@@ -538,12 +538,16 @@ added[2.1]
 
 added[2.2]
 `es.net.proxy.https.host`:: Https proxy host name
+
 added[2.2]
 `es.net.proxy.https.port`:: Https proxy port
+
 added[2.2]
 `es.net.proxy.https.user`:: Https proxy user name
+
 added[2.2]
 `es.net.proxy.https.pass`:: <<keystore,Securable>>. Https proxy password
+
 added[2.2]
 `es.net.proxy.https.use.system.props`(default yes):: Whether the use the system Https proxy properties (namely `https.proxyHost` and `https.proxyPort`) or not
 
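For orientation (not part of this commit's changes): the `es.net.proxy.https.*` settings documented in this hunk configure an HTTPS proxy for {eh}'s connections. A minimal sketch of wiring them up from a Spark job, where {eh} picks up `es.*` keys from the SparkConf; the proxy host, port, and credentials below are placeholders, not values from the source:

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}

// All proxy values below are placeholders; substitute your own.
val conf = new SparkConf()
  .setAppName("es-hadoop-https-proxy")
  .set("es.net.proxy.https.host", "proxy.example.com") // assumed proxy host
  .set("es.net.proxy.https.port", "8443")              // assumed proxy port
  .set("es.net.proxy.https.user", "proxyUser")
  .set("es.net.proxy.https.pass", "proxyPass")
// Alternatively, defer to the JVM-wide https.proxyHost/https.proxyPort:
//   .set("es.net.proxy.https.use.system.props", "yes")

val sc = new SparkContext(conf)
----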

docs/src/reference/asciidoc/core/pig.adoc

Lines changed: 1 addition & 1 deletion
@@ -164,7 +164,7 @@ For example:
 [source,sql]
 ----
 STORE B INTO '...' USING org.elasticsearch.hadoop.pig.EsStorage(
-	'es.mapping.names=date:@timestamp, uRL:url') <1>
+    'es.mapping.names=date:@timestamp, uRL:url') <1>
 ----
 
 <1> Pig column `date` mapped in {es} to `@timestamp`; Pig column `uRL` mapped in {es} to `url`

docs/src/reference/asciidoc/core/spark.adoc

Lines changed: 2 additions & 2 deletions
@@ -294,7 +294,7 @@ saveToEs(javaRDD, "my-collection-{media_type}/doc"); <1>
 [[spark-write-meta]]
 ==== Handling document metadata
 
-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
 In other words, for ++RDD++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.
 
 The metadata is described through the +Metadata+ Java http://docs.oracle.com/javase/tutorial/java/javaOO/enum.html[enum] within +org.elasticsearch.spark.rdd+ package which identifies its type - +id+, +ttl+, +version+, etc...
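
The paragraph in this hunk describes {eh}'s pair-RDD metadata mechanism. A minimal sketch of that pattern, using the `saveToEsWithMeta` enrichment and `Metadata` enum named in the surrounding docs; the documents and the "airports/2015" index/type are illustrative, not part of this commit:

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._                 // adds saveToEsWithMeta to RDDs
import org.elasticsearch.spark.rdd.Metadata._    // ID, TTL, VERSION, ...

val sc = new SparkContext(new SparkConf().setAppName("es-meta-sketch"))

// Illustrative documents; "airports/2015" is an assumed index/type.
val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val muc = Map("iata" -> "MUC", "name" -> "Munich")

// In a pair RDD the key carries the metadata (here just the document ID)
// and the value becomes the document source.
sc.makeRDD(Seq((1, otp), (2, muc))).saveToEsWithMeta("airports/2015")

// Richer metadata goes in a Map keyed by the Metadata enum.
val otpMeta = Map(ID -> 1, VERSION -> "23")
val mucMeta = Map(ID -> 2, VERSION -> "23")
sc.makeRDD(Seq((otpMeta, otp), (mucMeta, muc))).saveToEsWithMeta("airports/2015")
----
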
@@ -922,7 +922,7 @@ jssc.start();
 [[spark-streaming-write-meta]]
 ==== Handling document metadata
 
-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
 
 This is no different in Spark Streaming. For ++DStreams++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.
 
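
As above, a short sketch of the streaming variant this hunk describes, assuming the same illustrative documents and index; `saveToEsWithMeta` on a ++DStream++ comes from the `org.elasticsearch.spark.streaming` package, and the queue-backed stream is just a self-contained stand-in for a real source:

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.elasticsearch.spark.streaming._   // adds saveToEsWithMeta to DStreams

val sc  = new SparkContext(new SparkConf().setAppName("es-stream-meta-sketch"))
val ssc = new StreamingContext(sc, Seconds(1))

// Illustrative key-value data; the keys (1, 2) become the document IDs.
val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val muc = Map("iata" -> "MUC", "name" -> "Munich")
val microbatches = scala.collection.mutable.Queue(sc.makeRDD(Seq((1, otp), (2, muc))))

ssc.queueStream(microbatches).saveToEsWithMeta("airports/2015") // assumed index/type
ssc.start()
ssc.awaitTermination()
----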
