Package datafu.spark

package spark

Type Members

  1. class CoreBridgeDirectory extends PythonResource

    Contains all Python files needed by the bridge itself.

  2. abstract class PythonResource extends AnyRef

    Represents a resource that needs to be added to the PYTHONPATH used by ScalaPythonBridge.

    To ensure your Python resources (modules, files, etc.) are properly added to the bridge, do the following:

    1) Put all the resources under some root directory with a unique name x, and make sure path/to/x is visible to the class loader (usually just use src/main/resources/x).

    2) Extend this class, for example: class MyResource extends PythonResource("x"). This assumes x is under src/main/resources/x.

    3) Since the bridge uses ServiceLoader, add a file to your jar/project at META-INF/services/spark.utils.PythonResource containing a single line with the fully qualified name (including package) of MyResource.

    This process involves scanning the entire jar and copying files from the jar to a temporary location, so if your jar is really big, consider putting the resources in a smaller jar.
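
    A minimal sketch of the registration steps above (the package com.example is a hypothetical placeholder; the service file path spark.utils.PythonResource is as given in this description):

      // src/main/scala/com/example/MyResource.scala
      // Assumes the Python files live under src/main/resources/x
      package com.example

      class MyResource extends PythonResource("x")

      // META-INF/services/spark.utils.PythonResource -- a single line:
      // com.example.MyResource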

  3. case class ScalaPythonBridgeRunner(extraPath: String = "") extends Product with Serializable

    This class lets the user invoke PySpark code from Scala. Example usage:

    val runner = ScalaPythonBridgeRunner()
    runner.runPythonFile("my_package/my_pyspark_logic.py")

  4. class SparkDFUtilsBridge extends AnyRef

    Class definition so that this functionality can be exposed in PySpark.

Value Members

  1. object DataFrameOps

    Implicit class to enable easier usage, e.g.:

    df.dedup(..)

    instead of:

    SparkDFUtils.dedup(...)
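
    A minimal usage sketch, assuming a DataFrame df is in scope; the dedup arguments shown (a grouping column and an ordering column, with hypothetical column names "id" and "ts") are an assumption for illustration:

      import datafu.spark.DataFrameOps._
      import org.apache.spark.sql.functions.col

      // The implicit class adds dedup directly onto the DataFrame
      val deduped = df.dedup(col("id"), col("ts"))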

  2. object PythonPathsManager

    There are two phases of resolving the Python files' paths:

    1) When launching Spark: the files need to be added to spark.executorEnv.PYTHONPATH.

    2) When executing a Python file via the bridge: the files need to be added to the process's PYTHONPATH. This is different from the previous phase because this Python process is spawned by datafu-spark, not by Spark, and always runs on the driver.
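
    A minimal sketch of phase 1, assuming the resolved paths are already known (the path value is a hypothetical placeholder; in practice PythonPathsManager computes it):

      import org.apache.spark.sql.SparkSession

      // Phase 1: expose the Python files to executors via spark.executorEnv.PYTHONPATH
      val spark = SparkSession
        .builder()
        .appName("datafu-spark-example")
        .config("spark.executorEnv.PYTHONPATH", "/tmp/datafu_pyspark_resources")
        .getOrCreate()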

  3. object ResourceCloning

    Utility for extracting a resource from a jar and copying it to a temporary location.
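
    A sketch of the general technique (not ResourceCloning's actual API): read a classpath resource out of the jar and copy it to a temporary file. The resource name x/my_module.py is a hypothetical placeholder.

      import java.nio.file.{Files, StandardCopyOption}

      // Locate the resource on the classpath and copy it to a temp file
      val in = getClass.getClassLoader.getResourceAsStream("x/my_module.py")
      val target = Files.createTempFile("my_module", ".py")
      Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING)
      in.close()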

  4. object ScalaPythonBridge

    Do not instantiate this class! Use the companion object instead. This class should only be used by Python.

  5. object SparkDFUtils

  6. object SparkUDAFs
