public class DeepLearningTask extends FrameTask<DeepLearningTask>

Nested classes inherited from class FrameTask: FrameTask.ExtractDenseRow

Fields inherited from class FrameTask: _dinfo, _jobKey, _shuffle, _useFraction

Constructor Summary

| Constructor and Description |
|---|
| DeepLearningTask(water.Key jobKey, DeepLearningModelInfo inputModel, float fraction, int iteration) The only constructor. |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| void | applyMiniBatchUpdate(int n) Apply the gradient to update the weights. |
| static void | applyModelUpdates(Neurons[] neurons) Helper to apply back-propagation without clearing out the gradients afterwards; used for gradient checking. |
| protected void | chunkDone(long n) After each chunk, add the number of processed rows to the counter. |
| protected boolean | chunkInit() Override this to initialize at the beginning of chunk processing. |
| protected int | getMiniBatchSize() Return the mini-batch size. Note: if this is overridden, then applyMiniBatchUpdate must be overridden as well to perform the model/weight mini-batch update. |
| static Neurons[] | makeNeuronsForTesting(DeepLearningModelInfo minfo) |
| static Neurons[] | makeNeuronsForTraining(DeepLearningModelInfo minfo) |
| DeepLearningModelInfo | model_info() Accessor to the object containing the (final) state of the Deep Learning model. Should only be queried after calling this.doAll(Frame training). |
| protected void | postGlobal() After all reduces are done, the driver node calls this method to clean up. This is only needed if we are not inside a DeepLearningTask2 (which does the reduction between replicated data workers). |
| protected void | postLocal() After all maps are done on a node, this is called to store the per-node model into DKV (for elastic averaging); otherwise, do nothing. |
| void | processRow(long seed, DataInfo.Row r) Process one training row at a time (online learning). |
| void | reduce(DeepLearningTask other) Average the per-node models (for elastic averaging; they were already written to DKV in postLocal()). This is a no-op between F/J worker threads, which operate on the same weights/biases. |
| protected void | setupLocal() Transfer ownership from the global (shared) model to the local model that will be worked on. |
| static void | step(long seed, Neurons[] neurons, DeepLearningModelInfo minfo, DeepLearningModelInfo consensus_minfo, boolean training, double[] responses, double offset) Forward propagation. Assumption: layer 0 has _a filled with (horizontalized categoricals) double values. |
Methods inherited from class FrameTask: closeLocal, dinfo, map, processRow

Methods inherited from class water.MRTask: appendables, asyncExec, asyncExecOnAllNodes, block, compute2, dfork, dinvoke, doAll, doAllNodes, getResult, isReleasable, map, onCompletion, onExceptionalCompletion, outputFrame, priority, profString, self, setProfile

Methods inherited from class water.DTask: copyOver, getDException, hasException, logVerbose, onAck, onAckAck, setException

Methods inherited from other superclasses (water.Iced, water.H2O.H2OCountedCompleter, jsr166y.CountedCompleter, jsr166y.ForkJoinTask): clone, compute, frozenType, icer, nextThrPriority, read, read_impl, readJSON, readJSON_impl, write, write_impl, writeJSON, writeJSON_impl, addToPendingCount, compareAndSetPendingCount, complete, exec, getCompleter, getPendingCount, getRawResult, setCompleter, setPendingCount, setRawResult, tryComplete, adapt, cancel, compareAndSetForkJoinTaskTag, completeExceptionally, fork, get, getException, getForkJoinTaskTag, getPool, getQueuedTaskCount, getSurplusQueuedTaskCount, helpQuiesce, inForkJoinPool, invoke, invokeAll, isCancelled, isCompletedAbnormally, isCompletedNormally, isDone, join, peekNextLocalTask, pollNextLocalTask, pollTask, quietlyComplete, quietlyInvoke, quietlyJoin, reinitialize, setForkJoinTaskTag, tryUnfork

Constructor Detail

DeepLearningTask

public DeepLearningTask(water.Key jobKey,
                        DeepLearningModelInfo inputModel,
                        float fraction,
                        int iteration)

The only constructor.

Parameters:
jobKey -
inputModel - Initial model state
fraction - Fraction of rows of the training data to train with
iteration -

Method Detail

model_info

public final DeepLearningModelInfo model_info()

Accessor to the object containing the (final) state of the Deep Learning model. Should only be queried after calling this.doAll(Frame training).
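To make the constructor and model_info() contract concrete, here is a minimal sketch of one training pass: build the task with a job key, the current model state, a row-sampling fraction, and the iteration counter, run it over the training Frame with doAll, then read back the updated model. The package imports (hex.deeplearning, water.fvec), the wrapper class, and all variable names are assumptions for illustration; inside H2O this wiring is done by the Deep Learning driver, not by user code.

```java
// Minimal sketch of one training pass over a Frame.
// Assumptions: package locations (hex.deeplearning, water.fvec), the wrapper class,
// and all variable names are illustrative; H2O's Deep Learning driver normally does this.
import hex.deeplearning.DeepLearningModelInfo;
import hex.deeplearning.DeepLearningTask;
import water.Key;
import water.fvec.Frame;

public class DeepLearningTaskUsageSketch {
  static DeepLearningModelInfo trainOneIteration(Key jobKey,
                                                 DeepLearningModelInfo modelInfo,
                                                 Frame training,
                                                 int iteration) {
    DeepLearningTask task = new DeepLearningTask(jobKey, modelInfo, 1.0f, iteration);
    task.doAll(training);        // map/reduce over all chunks of the training Frame
    return task.model_info();    // only valid to query after doAll(Frame training)
  }
}
```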
setupLocal

protected void setupLocal()

Transfer ownership from the global (shared) model to the local model which will be worked on.

Overrides:
setupLocal in class FrameTask<DeepLearningTask>

chunkInit

protected boolean chunkInit()

Override this to initialize at the beginning of chunk processing.

Overrides:
chunkInit in class FrameTask<DeepLearningTask>

processRow

public final void processRow(long seed,
                             DataInfo.Row r)

Process one training row at a time (online learning).

Overrides:
processRow in class FrameTask<DeepLearningTask>

Parameters:
seed - Seed is only used if reproducible mode is enabled
r - Row (must be dense for now)
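The descriptions of chunkInit(), processRow(...), getMiniBatchSize(), applyMiniBatchUpdate(n), and chunkDone(n) imply a per-chunk calling order that the following self-contained toy sketch makes explicit. The RowTrainer interface merely mirrors those method names for illustration; it is not part of H2O, and the exact loop FrameTask runs internally may differ (for example in how a trailing partial mini-batch or the chunkInit() return value is handled).

```java
// Toy illustration of the per-chunk contract described above. 'RowTrainer' is a
// hypothetical stand-in for the relevant DeepLearningTask/FrameTask methods.
interface RowTrainer<R> {
  boolean chunkInit();                 // per-chunk initialization
  void processRow(long seed, R row);   // online learning: one row at a time
  int getMiniBatchSize();
  void applyMiniBatchUpdate(int n);    // apply accumulated gradients to the weights
  void chunkDone(long n);              // bookkeeping: number of processed rows
}

final class ChunkLoopSketch {
  static <R> void processChunk(RowTrainer<R> t, java.util.List<R> rows, long seed) {
    if (!t.chunkInit()) return;        // treating a false return as "skip chunk" is an assumption
    int inBatch = 0;
    for (R r : rows) {
      t.processRow(seed, r);
      if (++inBatch == t.getMiniBatchSize()) {
        t.applyMiniBatchUpdate(inBatch);
        inBatch = 0;
      }
    }
    if (inBatch > 0) t.applyMiniBatchUpdate(inBatch); // trailing partial mini-batch (assumption)
    t.chunkDone(rows.size());
  }
}
```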
applyMiniBatchUpdate

public void applyMiniBatchUpdate(int n)

Apply the gradient to update the weights.

Overrides:
applyMiniBatchUpdate in class FrameTask<DeepLearningTask>

Parameters:
n - number of trained examples in this last mini-batch

applyModelUpdates

public static void applyModelUpdates(Neurons[] neurons)

Helper to apply back-propagation without clearing out the gradients afterwards. Used for gradient checking.

Parameters:
neurons -

getMiniBatchSize

protected int getMiniBatchSize()

Return the mini-batch size. Note: if this is overridden, then applyMiniBatchUpdate must be overridden as well to perform the model/weight mini-batch update.

Overrides:
getMiniBatchSize in class FrameTask<DeepLearningTask>

chunkDone

protected void chunkDone(long n)

After each chunk, add the number of processed rows to the counter.

Overrides:
chunkDone in class FrameTask<DeepLearningTask>

Parameters:
n - Number of processed rows
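applyModelUpdates(Neurons[]) is described above as a gradient-checking helper: it applies the back-propagated updates without clearing the accumulated gradients, so they remain available for comparison with numerical estimates. The fragment below sketches where it fits; the seed, responses, offset of 0, and passing null for consensus_minfo are assumptions made purely for illustration, and the finite-difference comparison itself is omitted.

```java
// Gradient-checking sketch (illustrative; assumes imports from hex.deeplearning).
// Passing null for consensus_minfo and the concrete seed/offset values are assumptions.
static void gradientCheckSketch(DeepLearningModelInfo modelInfo, double[] responses) {
  Neurons[] neurons = DeepLearningTask.makeNeuronsForTraining(modelInfo);

  // Forward propagation for one example in training mode; layer 0 is assumed to
  // already hold the (horizontalized) input values, as noted for step() below.
  DeepLearningTask.step(42L /*seed*/, neurons, modelInfo, null /*consensus_minfo*/,
                        true /*training*/, responses, 0.0 /*offset*/);

  // Apply the model updates WITHOUT clearing the gradients, so the analytic
  // gradients can still be read and compared against finite-difference estimates.
  DeepLearningTask.applyModelUpdates(neurons);
}
```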
postLocal

protected void postLocal()

After all maps are done on a node, this is called to store the per-node model into DKV (for elastic averaging); otherwise, do nothing.

Overrides:
postLocal in class water.MRTask<DeepLearningTask>

reduce

public void reduce(DeepLearningTask other)

Average the per-node models (for elastic averaging; they were already written to DKV in postLocal()). This is a no-op between F/J worker threads, which operate on the same weights/biases.

Overrides:
reduce in class water.MRTask<DeepLearningTask>

Parameters:
other -

postGlobal

protected void postGlobal()

After all reduces are done, the driver node calls this method to clean up. This is only needed if we are not inside a DeepLearningTask2 (which does the reduction between replicated data workers).

Overrides:
postGlobal in class water.MRTask<DeepLearningTask>
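reduce(DeepLearningTask) averages the per-node models for elastic averaging. Conceptually that reduction is an element-wise mean over the per-node weights and biases; the toy snippet below shows that arithmetic on plain arrays only to illustrate the idea, since the real reduction operates on DeepLearningModelInfo objects inside H2O.

```java
// Conceptual illustration of "average the per-node models": element-wise mean of
// per-node weight vectors. Plain-array toy code; not the actual H2O reduction.
static double[] averagePerNodeWeights(double[][] perNodeWeights) {
  final int nNodes = perNodeWeights.length;
  double[] avg = new double[perNodeWeights[0].length];
  for (double[] w : perNodeWeights)
    for (int i = 0; i < avg.length; i++)
      avg[i] += w[i] / nNodes;
  return avg;
}
```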
makeNeuronsForTraining

public static Neurons[] makeNeuronsForTraining(DeepLearningModelInfo minfo)

makeNeuronsForTesting

public static Neurons[] makeNeuronsForTesting(DeepLearningModelInfo minfo)

step

public static void step(long seed,
                        Neurons[] neurons,
                        DeepLearningModelInfo minfo,
                        DeepLearningModelInfo consensus_minfo,
                        boolean training,
                        double[] responses,
                        double offset)

Forward propagation. Assumption: layer 0 has _a filled with (horizontalized categoricals) double values.

Parameters:
seed -
neurons -
minfo -
consensus_minfo -
training -
responses - Standardized response(s)
offset -
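For scoring-style use, the same step(...) entry point can be driven with neurons from makeNeuronsForTesting and training set to false. The sketch below is illustrative only: the null consensus model, the seed/offset values, and the assumption that layer 0's _a has already been filled with the horizontalized input row are not guaranteed by this page.

```java
// Forward-propagation (scoring) sketch; illustrative only, assumes imports from
// hex.deeplearning and that layer 0's _a already holds the horizontalized input row.
static void forwardSketch(DeepLearningModelInfo modelInfo, double[] responses) {
  Neurons[] neurons = DeepLearningTask.makeNeuronsForTesting(modelInfo);
  DeepLearningTask.step(0L /*seed*/, neurons, modelInfo, null /*consensus_minfo*/,
                        false /*training*/, responses, 0.0 /*offset*/);
  // The output layer's activations now hold the network's prediction for this row.
}
```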