Commit 2ae4bdc ("updated readme")
1 parent 6bcdaf4

2 files changed: 14 additions & 12 deletions

README.md

Lines changed: 8 additions & 6 deletions
@@ -61,14 +61,16 @@ After downloading the Opaque codebase, build and test it as follows.
 
 Next, run Apache Spark SQL queries with Opaque as follows, assuming [Spark 3.0](https://www.apache.org/dyn/closer.lua/spark/spark-3.0.1/spark-3.0.1-bin-hadoop2.7.tgz) (`wget http://apache.mirrors.pair.com/spark/spark-3.0.1/spark-3.0.1-bin-hadoop2.7.tgz`) is already installed:
 
-1. Package Opaque into a JAR:
+\* Opaque needs Spark's `'spark.executor.instances'` property to be set. This can be done in a custom config file, the default config file found at `/opt/spark/conf/spark-defaults.conf`, or as a `spark-submit` or `spark-shell` argument: `--conf 'spark.executor.instances=<value>'`.
+
+2. Package Opaque into a JAR:
 
    ```sh
    cd ${OPAQUE_HOME}
    build/sbt package
    ```
 
-2. Launch the Spark shell with Opaque:
+3. Launch the Spark shell with Opaque:
 
    ```sh
    ${SPARK_HOME}/bin/spark-shell --jars ${OPAQUE_HOME}/target/scala-2.12/opaque_2.12-0.1.jar
@@ -81,23 +83,23 @@ Next, run Apache Spark SQL queries with Opaque as follows, assuming [Spark 3.0](
    JVM_OPTS="-Xmx4G" build/sbt console
    ```
 
-3. Inside the Spark shell, import Opaque's DataFrame methods and install Opaque's query planner rules:
+4. Inside the Spark shell, import Opaque's DataFrame methods and install Opaque's query planner rules:
 
    ```scala
    import edu.berkeley.cs.rise.opaque.implicits._
 
    edu.berkeley.cs.rise.opaque.Utils.initSQLContext(spark.sqlContext)
    ```
 
-4. Create an encrypted DataFrame:
+5. Create an encrypted DataFrame:
 
    ```scala
    val data = Seq(("foo", 4), ("bar", 1), ("baz", 5))
    val df = spark.createDataFrame(data).toDF("word", "count")
    val dfEncrypted = df.encrypted
    ```
 
-5. Query the DataFrames and explain the query plan to see the secure operators:
+6. Query the DataFrames and explain the query plan to see the secure operators:
 
 
    ```scala
@@ -117,7 +119,7 @@ Next, run Apache Spark SQL queries with Opaque as follows, assuming [Spark 3.0](
    // +----+-----+
    ```
 
-6. Save and load an encrypted DataFrame:
+7. Save and load an encrypted DataFrame:
 
    ```scala
    dfEncrypted.write.format("edu.berkeley.cs.rise.opaque.EncryptedSource").save("dfEncrypted")
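
For reference, the `spark.executor.instances` requirement that the new note documents can also be satisfied programmatically when constructing the session, rather than through `spark-defaults.conf` or a `--conf` flag. A minimal sketch, not part of this commit; the application name and the executor count of 2 are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Equivalent to launching with --conf 'spark.executor.instances=2'.
// The app name and the value 2 are placeholders, not from the commit.
val spark = SparkSession
  .builder()
  .appName("OpaqueExample")
  .config("spark.executor.instances", "2")
  .getOrCreate()
```

Setting the property explicitly matters because the RA.scala change below reads it to decide how many executors must register before attestation begins.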

src/main/scala/edu/berkeley/cs/rise/opaque/RA.scala

Lines changed: 6 additions & 6 deletions
@@ -28,13 +28,13 @@ import edu.berkeley.cs.rise.opaque.execution.SP
 object RA extends Logging {
   def initRA(sc: SparkContext): Unit = {
 
-    // This is required in order for all executors
-    // to have time to start up. Otherwise, getExecutorMemoryStatus
-    // below will not be initialized to the correct value and
-    // all enclaves won't be attested successfully
-    Thread.sleep(5000)
+    // All executors need to be initialized before attestation can occur
+    var numExecutors = 1
+    if (!sc.isLocal) {
+      numExecutors = sc.getConf.getInt("spark.executor.instances", -1)
+      while (!sc.isLocal && sc.getExecutorMemoryStatus.size < numExecutors) {}
+    }
 
-    val numExecutors = sc.getExecutorMemoryStatus.size
     val rdd = sc.parallelize(Seq.fill(numExecutors) {()}, numExecutors)
     val intelCert = Utils.findResource("AttestationReportSigningCACert.pem")
     val sp = new SP()
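
The change above replaces a fixed 5-second startup sleep with a loop that waits until `getExecutorMemoryStatus` reports as many registered executors as `spark.executor.instances`, so attestation no longer races executor startup. Two consequences are worth noting: the loop body is empty, so the driver busy-waits, and the `-1` default means an unset property skips the wait and leaves `numExecutors` negative (hence the README note making the property required). Below is a minimal sketch of the same barrier with a bounded, non-spinning wait; the helper name, timeout, and polling interval are assumptions, not part of the commit:

```scala
import org.apache.spark.SparkContext

// Sketch: block until the expected number of executors has registered,
// polling instead of spinning, and give up after timeoutMs. Not the
// commit's code; waitForExecutors, timeoutMs, and the 100 ms poll are
// illustrative choices.
def waitForExecutors(sc: SparkContext, timeoutMs: Long = 60000L): Int = {
  if (sc.isLocal) {
    1 // local mode has no separate executors to wait for
  } else {
    val expected = sc.getConf.getInt("spark.executor.instances", -1)
    val deadline = System.currentTimeMillis() + timeoutMs
    // getExecutorMemoryStatus grows as executors register with the driver.
    while (sc.getExecutorMemoryStatus.size < expected &&
        System.currentTimeMillis() < deadline) {
      Thread.sleep(100)
    }
    sc.getExecutorMemoryStatus.size
  }
}
```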
