Remove Neural Networks samples.
NNAPI is deprecated in favor of TFLite: https://developer.android.com/ndk/guides/neuralnetworks/migration-guide. TFLite has its own docs and samples, and it isn't an NDK API anyway, so we don't need to replace these samples with TFLite equivalents. Just delete the samples for the API we're recommending against so that people aren't confused into following bad advice.
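
For anyone who lands on this PR from the deleted samples: the migration guide above is the real reference. Purely as an illustrative sketch (not part of this repo), the TFLite C API flow that replaces the NNAPI model/compilation/execution boilerplate looks roughly like this; the model file name and tensor sizes are placeholders:

```cpp
// Minimal TFLite C API inference sketch. "model.tflite" and the buffer sizes
// are placeholders; a real app would load its own converted model.
#include <tensorflow/lite/c/c_api.h>

#include <cstdio>
#include <vector>

int main() {
  TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
  if (!model) return 1;

  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
  TfLiteInterpreterAllocateTensors(interpreter);

  // Fill the first input tensor (assumes a float32 model of matching size).
  std::vector<float> input(200, 1.0f);
  TfLiteTensor* input_tensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
  TfLiteTensorCopyFromBuffer(input_tensor, input.data(),
                             input.size() * sizeof(float));

  // Run inference and read back the first output tensor.
  TfLiteInterpreterInvoke(interpreter);
  std::vector<float> output(200);
  const TfLiteTensor* output_tensor =
      TfLiteInterpreterGetOutputTensor(interpreter, 0);
  TfLiteTensorCopyToBuffer(output_tensor, output.data(),
                           output.size() * sizeof(float));
  std::printf("output[0] = %f\n", output[0]);

  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  return 0;
}
```

How the C library is linked depends on how a project consumes TFLite (AAR or custom build); that is covered by TFLite's own docs, which is exactly why no sample is added here.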
nn-samples/.gitignore
@@ -1,9 +0,0 @@
*.iml
.gradle
/local.properties
/.idea
.DS_Store
/build
/captures
.externalNativeBuild
.cxx
@@ -1,59 +0,0 @@
# Android Neural Networks API Sample

The samples demonstrate how to use Android NNAPI exported through Android NDK:

- basic: showcase the main NNAPI concept from Android 8
- sequence: showcase the advanced features added in Android 11

Check each module's README.md for additional descriptions and additional
requirements.

## Pre-requisites

- Android Studio 4.0+.
- NDK r16+.
- Android API 27+.

## Getting Started

1. [Download Android Studio](http://developer.android.com/sdk/index.html)
1. Launch Android Studio.
1. Open the sample directory.
1. Click *Tools/Android/Sync Project with Gradle Files*.
1. Click *Run/Run 'app'*.

## Screenshots

<img src="basic/screenshot.png" width="360">
<img src="sequence/screenshot.png" width="360">

## Support

If you've found an error in these samples, please
[file an issue](https://github.com/android/ndk-samples/issues/new).

Patches are encouraged, and may be submitted by
[forking this project](https://github.com/android/ndk-samples/fork) and
submitting a pull request through GitHub. Please see
[CONTRIBUTING.md](../CONTRIBUTING.md) for more details.

- [Stack Overflow](http://stackoverflow.com/questions/tagged/android-ndk)
- [Android Tools Feedbacks](http://tools.android.com/feedback)

## License

Copyright 2020 Google LLC

Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for
additional information regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
nn-samples/basic/.gitignore
@@ -1 +0,0 @@
/build
@@ -1,44 +0,0 @@
# Android Neural Networks API Sample: Basic

Android Neural Networks API (NN API) Sample demonstrates basic usages of NN API
with a simple model that consists of three operations: two additions and a
multiplication.

The sums created by the additions are the inputs to the multiplication. In
essence, we are creating a graph that computes: (tensor0 + tensor1) * (tensor2 +
tensor3).

```java
tensor0 ---+
           +--- ADD ---> intermediateOutput0 ---+
tensor1 ---+                                    |
                                                +--- MUL---> output
tensor2 ---+                                    |
           +--- ADD ---> intermediateOutput1 ---+
tensor3 ---+
```

Two of the four tensors, tensor0 and tensor2 being added are constants, defined
in the model. They represent the weights that would have been learned during a
training process, loaded from model_data.bin.

The other two tensors, tensor1 and tensor3 will be inputs to the model. Their
values will be provided when we execute the model. These values can change from
execution to execution.

Besides the two input tensors, an optional fused activation function can also be
defined for ADD and MUL. In this example, we'll simply set it to NONE.

The model then has 8 operands:

- 2 tensors that are inputs to the model. These are fed to the two ADD
  operations.
- 2 constant tensors that are the other two inputs to the ADD operations.
- 1 fuse activation operand reused for the ADD operations and the MUL operation.
- 2 intermediate tensors, representing outputs of the ADD operations and inputs
  to the MUL operation.
- 1 model output.

## Screenshots

<img src="screenshot.png" width="480">
@@ -1,35 +0,0 @@
plugins {
    id "ndksamples.android.application"
    id 'ndksamples.android.kotlin'
}

android {
    namespace 'com.example.android.basic'

    defaultConfig {
        applicationId "com.example.android.basic"
        minSdkVersion 27
        versionCode 1
        versionName "1.0"
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
        }
    }

    buildFeatures {
        viewBinding true
    }

    androidResources {
        noCompress 'bin'
    }
}

dependencies {
    implementation libs.androidx.constraintlayout
    implementation libs.kotlinx.coroutines.core
    implementation libs.kotlinx.coroutines.android
}
nn-samples/basic/proguard-rules.pro
@@ -1,21 +0,0 @@
# Add project specific ProGuard rules here.
# You can control the set of applied configuration files using the
# proguardFiles setting in build.gradle.
#
# For more details, see
#   http://developer.android.com/guide/developing/tools/proguard.html

# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
#   public *;
#}

# Uncomment this to preserve the line number information for
# debugging stack traces.
#-keepattributes SourceFile,LineNumberTable

# If you keep the line number information, uncomment this to
# hide the original source file name.
#-renamesourcefileattribute SourceFile
nn-samples/basic/screenshot.png (binary image, 66 KiB, deleted)
@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity"
            android:exported="true">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
@@ -1,15 +0,0 @@
cmake_minimum_required(VERSION 3.22.1)
project(NnSamplesBasic LANGUAGES CXX)

add_library(basic
  SHARED
  nn_sample.cpp
  simple_model.cpp
)

target_link_libraries(basic
  # Link with libneuralnetworks.so for NN API
  neuralnetworks
  android
  log
)
@@ -1,75 +0,0 @@
/**
 * Copyright 2017 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <android/sharedmem.h>
#include <fcntl.h>
#include <jni.h>
#include <sys/mman.h>

#include <iomanip>
#include <sstream>
#include <string>

#include "simple_model.h"

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_android_basic_MainActivity_initModel(JNIEnv* env,
                                                      jobject /* this */,
                                                      jobject _assetManager,
                                                      jstring _assetName) {
  // Get the file descriptor of the model data file.
  AAssetManager* assetManager = AAssetManager_fromJava(env, _assetManager);
  const char* assetName = env->GetStringUTFChars(_assetName, NULL);
  AAsset* asset =
      AAssetManager_open(assetManager, assetName, AASSET_MODE_BUFFER);
  if (asset == nullptr) {
    __android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
                        "Failed to open the asset.");
    return 0;
  }
  env->ReleaseStringUTFChars(_assetName, assetName);
  SimpleModel* nn_model = new SimpleModel(asset);
  AAsset_close(asset);
  if (!nn_model->CreateCompiledModel()) {
    __android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
                        "Failed to prepare the model.");
    return 0;
  }

  return (jlong)(uintptr_t)nn_model;
}

extern "C" JNIEXPORT jfloat JNICALL
Java_com_example_android_basic_MainActivity_startCompute(JNIEnv* env,
                                                         jobject /* this */,
                                                         jlong _nnModel,
                                                         jfloat inputValue1,
                                                         jfloat inputValue2) {
  SimpleModel* nn_model = (SimpleModel*)_nnModel;
  float result = 0.0f;
  nn_model->Compute(inputValue1, inputValue2, &result);
  return result;
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_android_basic_MainActivity_destroyModel(JNIEnv* env,
                                                         jobject /* this */,
                                                         jlong _nnModel) {
  SimpleModel* nn_model = (SimpleModel*)_nnModel;
  delete (nn_model);
}
@@ -1,556 +0,0 @@
|
||||
/**
|
||||
* Copyright 2017 The Android Open Source Project
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
#include "simple_model.h"
|
||||
|
||||
#include <android/asset_manager_jni.h>
|
||||
#include <android/log.h>
|
||||
#include <android/sharedmem.h>
|
||||
#include <sys/mman.h>
|
||||
#include <unistd.h>
|
||||
|
||||
#include <string>
|
||||
|
||||
namespace {
|
||||
|
||||
// Create ANeuralNetworksMemory from an asset file.
|
||||
//
|
||||
// Note that, at API level 30 or earlier, the NNAPI drivers may not have the
|
||||
// permission to access the asset file. To work around this issue, here we will:
|
||||
// 1. Allocate a large-enough shared memory to hold the model data;
|
||||
// 2. Copy the asset file to the shared memory;
|
||||
// 3. Create the NNAPI memory with the file descriptor of the shared memory.
|
||||
ANeuralNetworksMemory* createMemoryFromAsset(AAsset* asset) {
|
||||
// Allocate a large-enough shared memory to hold the model data.
|
||||
off_t length = AAsset_getLength(asset);
|
||||
int fd = ASharedMemory_create("model_data", length);
|
||||
if (fd < 0) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ASharedMemory_create failed with size %d", length);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
// Copy the asset file to the shared memory.
|
||||
void* data = mmap(nullptr, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
|
||||
if (data == nullptr) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"Failed to map a shared memory");
|
||||
close(fd);
|
||||
return nullptr;
|
||||
}
|
||||
AAsset_read(asset, data, length);
|
||||
munmap(data, length);
|
||||
|
||||
// Create the NNAPI memory with the file descriptor of the shared memory.
|
||||
ANeuralNetworksMemory* memory;
|
||||
int status = ANeuralNetworksMemory_createFromFd(
|
||||
length, PROT_READ | PROT_WRITE, fd, 0, &memory);
|
||||
|
||||
// It is safe to close the file descriptor here because
|
||||
// ANeuralNetworksMemory_createFromFd will create a dup.
|
||||
close(fd);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromFd failed for trained weights");
|
||||
return nullptr;
|
||||
}
|
||||
return memory;
|
||||
}
|
||||
|
||||
} // namespace
|
||||
|
||||
/**
|
||||
* SimpleModel Constructor.
|
||||
*
|
||||
* Initialize the member variables, including the shared memory objects.
|
||||
*/
|
||||
SimpleModel::SimpleModel(AAsset* asset)
|
||||
: model_(nullptr), compilation_(nullptr), dimLength_(TENSOR_SIZE) {
|
||||
tensorSize_ = dimLength_;
|
||||
inputTensor1_.resize(tensorSize_);
|
||||
|
||||
// Create ANeuralNetworksMemory from a file containing the trained data.
|
||||
memoryModel_ = createMemoryFromAsset(asset);
|
||||
|
||||
// Create ASharedMemory to hold the data for the second input tensor and
|
||||
// output output tensor.
|
||||
inputTensor2Fd_ = ASharedMemory_create("input2", tensorSize_ * sizeof(float));
|
||||
outputTensorFd_ = ASharedMemory_create("output", tensorSize_ * sizeof(float));
|
||||
|
||||
// Create ANeuralNetworksMemory objects from the corresponding ASharedMemory
|
||||
// objects.
|
||||
int status =
|
||||
ANeuralNetworksMemory_createFromFd(tensorSize_ * sizeof(float), PROT_READ,
|
||||
inputTensor2Fd_, 0, &memoryInput2_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromFd failed for Input2");
|
||||
return;
|
||||
}
|
||||
status = ANeuralNetworksMemory_createFromFd(
|
||||
tensorSize_ * sizeof(float), PROT_READ | PROT_WRITE, outputTensorFd_, 0,
|
||||
&memoryOutput_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromFd failed for Output");
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a graph that consists of three operations: two additions and a
|
||||
* multiplication.
|
||||
* The sums created by the additions are the inputs to the multiplication. In
|
||||
* essence, we are creating a graph that computes:
|
||||
* (tensor0 + tensor1) * (tensor2 + tensor3).
|
||||
*
|
||||
* tensor0 ---+
|
||||
* +--- ADD ---> intermediateOutput0 ---+
|
||||
* tensor1 ---+ |
|
||||
* +--- MUL---> output
|
||||
* tensor2 ---+ |
|
||||
* +--- ADD ---> intermediateOutput1 ---+
|
||||
* tensor3 ---+
|
||||
*
|
||||
* Two of the four tensors, tensor0 and tensor2 being added are constants,
|
||||
* defined in the model. They represent the weights that would have been learned
|
||||
* during a training process.
|
||||
*
|
||||
* The other two tensors, tensor1 and tensor3 will be inputs to the model. Their
|
||||
* values will be provided when we execute the model. These values can change
|
||||
* from execution to execution.
|
||||
*
|
||||
* Besides the two input tensors, an optional fused activation function can
|
||||
* also be defined for ADD and MUL. In this example, we'll simply set it to
|
||||
* NONE.
|
||||
*
|
||||
* The graph then has 10 operands:
|
||||
* - 2 tensors that are inputs to the model. These are fed to the two
|
||||
* ADD operations.
|
||||
* - 2 constant tensors that are the other two inputs to the ADD operations.
|
||||
* - 1 fuse activation operand reused for the ADD operations and the MUL
|
||||
* operation.
|
||||
* - 2 intermediate tensors, representing outputs of the ADD operations and
|
||||
* inputs to the MUL operation.
|
||||
* - 1 model output.
|
||||
*
|
||||
* @return true for success, false otherwise
|
||||
*/
|
||||
bool SimpleModel::CreateCompiledModel() {
|
||||
int32_t status;
|
||||
|
||||
// Create the ANeuralNetworksModel handle.
|
||||
status = ANeuralNetworksModel_create(&model_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
uint32_t dimensions[] = {dimLength_};
|
||||
ANeuralNetworksOperandType float32TensorType{
|
||||
.type = ANEURALNETWORKS_TENSOR_FLOAT32,
|
||||
.dimensionCount = sizeof(dimensions) / sizeof(dimensions[0]),
|
||||
.dimensions = dimensions,
|
||||
.scale = 0.0f,
|
||||
.zeroPoint = 0,
|
||||
};
|
||||
ANeuralNetworksOperandType scalarInt32Type{
|
||||
.type = ANEURALNETWORKS_INT32,
|
||||
.dimensionCount = 0,
|
||||
.dimensions = nullptr,
|
||||
.scale = 0.0f,
|
||||
.zeroPoint = 0,
|
||||
};
|
||||
|
||||
/**
|
||||
* Add operands and operations to construct the model.
|
||||
*
|
||||
* Operands are implicitly identified by the order in which they are added to
|
||||
* the model, starting from 0.
|
||||
*
|
||||
* These indexes are not returned by the model_addOperand call. The
|
||||
* application must manage these values. Here, we use opIdx to do the
|
||||
* bookkeeping.
|
||||
*/
|
||||
uint32_t opIdx = 0;
|
||||
|
||||
// We first add the operand for the NONE activation function, and set its
|
||||
// value to ANEURALNETWORKS_FUSED_NONE.
|
||||
// This constant scalar operand will be used for all 3 operations.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &scalarInt32Type);
|
||||
uint32_t fusedActivationFuncNone = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)",
|
||||
fusedActivationFuncNone);
|
||||
return false;
|
||||
}
|
||||
|
||||
FuseCode fusedActivationCodeValue = ANEURALNETWORKS_FUSED_NONE;
|
||||
status = ANeuralNetworksModel_setOperandValue(
|
||||
model_, fusedActivationFuncNone, &fusedActivationCodeValue,
|
||||
sizeof(fusedActivationCodeValue));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_setOperandValue failed for operand (%d)",
|
||||
fusedActivationFuncNone);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Add operands for the tensors.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t tensor0 = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor0);
|
||||
return false;
|
||||
}
|
||||
// tensor0 is a constant tensor that was established during training.
|
||||
// We read these values from the corresponding ANeuralNetworksMemory object.
|
||||
status = ANeuralNetworksModel_setOperandValueFromMemory(
|
||||
model_, tensor0, memoryModel_, 0, tensorSize_ * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_setOperandValueFromMemory failed "
|
||||
"for operand (%d)",
|
||||
tensor0);
|
||||
return false;
|
||||
}
|
||||
|
||||
// tensor1 is one of the user provided input tensors to the trained model.
|
||||
// Its value is determined pre-execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t tensor1 = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor1);
|
||||
return false;
|
||||
}
|
||||
|
||||
// tensor2 is a constant tensor that was established during training.
|
||||
// We read these values from the corresponding ANeuralNetworksMemory object.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t tensor2 = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor2);
|
||||
return false;
|
||||
}
|
||||
status = ANeuralNetworksModel_setOperandValueFromMemory(
|
||||
model_, tensor2, memoryModel_, tensorSize_ * sizeof(float),
|
||||
tensorSize_ * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_setOperandValueFromMemory failed "
|
||||
"for operand (%d)",
|
||||
tensor2);
|
||||
return false;
|
||||
}
|
||||
|
||||
// tensor3 is one of the user provided input tensors to the trained model.
|
||||
// Its value is determined pre-execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t tensor3 = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor3);
|
||||
return false;
|
||||
}
|
||||
|
||||
// intermediateOutput0 is the output of the first ADD operation.
|
||||
// Its value is computed during execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t intermediateOutput0 = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)",
|
||||
intermediateOutput0);
|
||||
return false;
|
||||
}
|
||||
|
||||
// intermediateOutput1 is the output of the second ADD operation.
|
||||
// Its value is computed during execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t intermediateOutput1 = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)",
|
||||
intermediateOutput1);
|
||||
return false;
|
||||
}
|
||||
|
||||
// multiplierOutput is the output of the MUL operation.
|
||||
// Its value will be computed during execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t multiplierOutput = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)",
|
||||
multiplierOutput);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Add the first ADD operation.
|
||||
std::vector<uint32_t> add1InputOperands = {
|
||||
tensor0,
|
||||
tensor1,
|
||||
fusedActivationFuncNone,
|
||||
};
|
||||
status = ANeuralNetworksModel_addOperation(
|
||||
model_, ANEURALNETWORKS_ADD, add1InputOperands.size(),
|
||||
add1InputOperands.data(), 1, &intermediateOutput0);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperation failed for ADD_1");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Add the second ADD operation.
|
||||
// Note the fusedActivationFuncNone is used again.
|
||||
std::vector<uint32_t> add2InputOperands = {
|
||||
tensor2,
|
||||
tensor3,
|
||||
fusedActivationFuncNone,
|
||||
};
|
||||
status = ANeuralNetworksModel_addOperation(
|
||||
model_, ANEURALNETWORKS_ADD, add2InputOperands.size(),
|
||||
add2InputOperands.data(), 1, &intermediateOutput1);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperation failed for ADD_2");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Add the MUL operation.
|
||||
// Note that intermediateOutput0 and intermediateOutput1 are specified
|
||||
// as inputs to the operation.
|
||||
std::vector<uint32_t> mulInputOperands = {
|
||||
intermediateOutput0, intermediateOutput1, fusedActivationFuncNone};
|
||||
status = ANeuralNetworksModel_addOperation(
|
||||
model_, ANEURALNETWORKS_MUL, mulInputOperands.size(),
|
||||
mulInputOperands.data(), 1, &multiplierOutput);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperation failed for MUL");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Identify the input and output tensors to the model.
|
||||
// Inputs: {tensor1, tensor3}
|
||||
// Outputs: {multiplierOutput}
|
||||
std::vector<uint32_t> modelInputOperands = {
|
||||
tensor1,
|
||||
tensor3,
|
||||
};
|
||||
status = ANeuralNetworksModel_identifyInputsAndOutputs(
|
||||
model_, modelInputOperands.size(), modelInputOperands.data(), 1,
|
||||
&multiplierOutput);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_identifyInputsAndOutputs failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Finish constructing the model.
|
||||
// The values of constant and intermediate operands cannot be altered after
|
||||
// the finish function is called.
|
||||
status = ANeuralNetworksModel_finish(model_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_finish failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Create the ANeuralNetworksCompilation object for the constructed model.
|
||||
status = ANeuralNetworksCompilation_create(model_, &compilation_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksCompilation_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the preference for the compilation, so that the runtime and drivers
|
||||
// can make better decisions.
|
||||
// Here we prefer to get the answer quickly, so we choose
|
||||
// ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER.
|
||||
status = ANeuralNetworksCompilation_setPreference(
|
||||
compilation_, ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksCompilation_setPreference failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Finish the compilation.
|
||||
status = ANeuralNetworksCompilation_finish(compilation_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksCompilation_finish failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute with the given input data.
|
||||
* @param modelInputs:
|
||||
* inputValue1: The values to fill tensor1
|
||||
* inputValue2: The values to fill tensor3
|
||||
* @return computed result, or 0.0f if there is error.
|
||||
*/
|
||||
bool SimpleModel::Compute(float inputValue1, float inputValue2, float* result) {
|
||||
if (!result) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Create an ANeuralNetworksExecution object from the compiled model.
|
||||
// Note:
|
||||
// 1. All the input and output data are tied to the ANeuralNetworksExecution
|
||||
// object.
|
||||
// 2. Multiple concurrent execution instances could be created from the same
|
||||
// compiled model.
|
||||
// This sample only uses one execution of the compiled model.
|
||||
ANeuralNetworksExecution* execution;
|
||||
int32_t status = ANeuralNetworksExecution_create(compilation_, &execution);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set all the elements of the first input tensor (tensor1) to the same value
|
||||
// as inputValue1. It's not a realistic example but it shows how to pass a
|
||||
// small tensor to an execution.
|
||||
std::fill(inputTensor1_.data(), inputTensor1_.data() + tensorSize_,
|
||||
inputValue1);
|
||||
|
||||
// Tell the execution to associate inputTensor1 to the first of the two model
|
||||
// inputs. Note that the index "0" here means the first operand of the
|
||||
// modelInput list {tensor1, tensor3}, which means tensor1.
|
||||
status = ANeuralNetworksExecution_setInput(
|
||||
execution, 0, nullptr, inputTensor1_.data(), tensorSize_ * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setInput failed for input1");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the values of the second input operand (tensor3) to be inputValue2.
|
||||
// In reality, the values in the shared memory region will be manipulated by
|
||||
// other modules or processes.
|
||||
float* inputTensor2Ptr = reinterpret_cast<float*>(
|
||||
mmap(nullptr, tensorSize_ * sizeof(float), PROT_READ | PROT_WRITE,
|
||||
MAP_SHARED, inputTensor2Fd_, 0));
|
||||
for (int i = 0; i < tensorSize_; i++) {
|
||||
*inputTensor2Ptr = inputValue2;
|
||||
inputTensor2Ptr++;
|
||||
}
|
||||
munmap(inputTensor2Ptr, tensorSize_ * sizeof(float));
|
||||
|
||||
// ANeuralNetworksExecution_setInputFromMemory associates the operand with a
|
||||
// shared memory region to minimize the number of copies of raw data. Note
|
||||
// that the index "1" here means the second operand of the modelInput list
|
||||
// {tensor1, tensor3}, which means tensor3.
|
||||
status = ANeuralNetworksExecution_setInputFromMemory(
|
||||
execution, 1, nullptr, memoryInput2_, 0, tensorSize_ * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setInputFromMemory failed for input2");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the output tensor that will be filled by executing the model.
|
||||
// We use shared memory here to minimize the copies needed for getting the
|
||||
// output data.
|
||||
status = ANeuralNetworksExecution_setOutputFromMemory(
|
||||
execution, 0, nullptr, memoryOutput_, 0, tensorSize_ * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setOutputFromMemory failed for output");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Start the execution of the model.
|
||||
// Note that the execution here is asynchronous, and an ANeuralNetworksEvent
|
||||
// object will be created to monitor the status of the execution.
|
||||
ANeuralNetworksEvent* event = nullptr;
|
||||
status = ANeuralNetworksExecution_startCompute(execution, &event);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_startCompute failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Wait until the completion of the execution. This could be done on a
|
||||
// different thread. By waiting immediately, we effectively make this a
|
||||
// synchronous call.
|
||||
status = ANeuralNetworksEvent_wait(event);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksEvent_wait failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
ANeuralNetworksEvent_free(event);
|
||||
ANeuralNetworksExecution_free(execution);
|
||||
|
||||
// Validate the results.
|
||||
const float goldenRef = (inputValue1 + 0.5f) * (inputValue2 + 0.5f);
|
||||
float* outputTensorPtr =
|
||||
reinterpret_cast<float*>(mmap(nullptr, tensorSize_ * sizeof(float),
|
||||
PROT_READ, MAP_SHARED, outputTensorFd_, 0));
|
||||
for (int32_t idx = 0; idx < tensorSize_; idx++) {
|
||||
float delta = outputTensorPtr[idx] - goldenRef;
|
||||
delta = (delta < 0.0f) ? (-delta) : delta;
|
||||
if (delta > FLOAT_EPISILON) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"Output computation Error: output0(%f), delta(%f) @ idx(%d)",
|
||||
outputTensorPtr[0], delta, idx);
|
||||
}
|
||||
}
|
||||
*result = outputTensorPtr[0];
|
||||
munmap(outputTensorPtr, tensorSize_ * sizeof(float));
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* SimpleModel Destructor.
|
||||
*
|
||||
* Release NN API objects and close the file descriptors.
|
||||
*/
|
||||
SimpleModel::~SimpleModel() {
|
||||
ANeuralNetworksCompilation_free(compilation_);
|
||||
ANeuralNetworksModel_free(model_);
|
||||
ANeuralNetworksMemory_free(memoryModel_);
|
||||
ANeuralNetworksMemory_free(memoryInput2_);
|
||||
ANeuralNetworksMemory_free(memoryOutput_);
|
||||
close(inputTensor2Fd_);
|
||||
close(outputTensorFd_);
|
||||
}
|
||||
@@ -1,64 +0,0 @@
|
||||
/**
|
||||
* Copyright 2017 The Android Open Source Project
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
#ifndef NNAPI_SIMPLE_MODEL_H
|
||||
#define NNAPI_SIMPLE_MODEL_H
|
||||
|
||||
#include <android/NeuralNetworks.h>
|
||||
#include <android/asset_manager_jni.h>
|
||||
|
||||
#include <vector>
|
||||
|
||||
#define FLOAT_EPISILON (1e-6)
|
||||
#define TENSOR_SIZE 200
|
||||
#define LOG_TAG "NNAPI_BASIC"
|
||||
|
||||
/**
|
||||
* SimpleModel
|
||||
* Build up the hardcoded graph of
|
||||
* ADD_1 ---+
|
||||
* +--- MUL--->output result
|
||||
* ADD_2 ---+
|
||||
*
|
||||
* Operands are all 2-D TENSOR_FLOAT32 of:
|
||||
* dimLength x dimLength
|
||||
* with NO fused_activation operation
|
||||
*
|
||||
*/
|
||||
class SimpleModel {
|
||||
public:
|
||||
explicit SimpleModel(AAsset* asset);
|
||||
~SimpleModel();
|
||||
|
||||
bool CreateCompiledModel();
|
||||
bool Compute(float inputValue1, float inputValue2, float* result);
|
||||
|
||||
private:
|
||||
ANeuralNetworksModel* model_;
|
||||
ANeuralNetworksCompilation* compilation_;
|
||||
ANeuralNetworksMemory* memoryModel_;
|
||||
ANeuralNetworksMemory* memoryInput2_;
|
||||
ANeuralNetworksMemory* memoryOutput_;
|
||||
|
||||
uint32_t dimLength_;
|
||||
uint32_t tensorSize_;
|
||||
|
||||
std::vector<float> inputTensor1_;
|
||||
int inputTensor2Fd_;
|
||||
int outputTensorFd_;
|
||||
};
|
||||
|
||||
#endif // NNAPI_SIMPLE_MODEL_H
|
||||
@@ -1,85 +0,0 @@
|
||||
/**
|
||||
* Copyright 2017 The Android Open Source Project
|
||||
*
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package com.example.android.basic
|
||||
|
||||
import android.app.Activity
|
||||
import android.content.res.AssetManager
|
||||
import android.os.Bundle
|
||||
import android.widget.Toast
|
||||
import com.example.android.basic.databinding.ActivityMainBinding
|
||||
import kotlinx.coroutines.*
|
||||
|
||||
/*
|
||||
MainActivity to take care of UI and user inputs
|
||||
*/
|
||||
class MainActivity : Activity() {
|
||||
private var modelHandle = 0L
|
||||
|
||||
/*
|
||||
3 JNI functions managing NN models, refer to basic/README.md
|
||||
for model structure
|
||||
*/
|
||||
private external fun initModel(assetManager: AssetManager?, assetName: String?): Long
|
||||
private external fun startCompute(modelHandle: Long, input1: Float, input2: Float): Float
|
||||
private external fun destroyModel(modelHandle: Long)
|
||||
|
||||
private lateinit var binding: ActivityMainBinding
|
||||
private val activityJob = Job()
|
||||
|
||||
override fun onCreate(savedInstanceState: Bundle?) {
|
||||
super.onCreate(savedInstanceState)
|
||||
binding = ActivityMainBinding.inflate(layoutInflater)
|
||||
setContentView(binding.root)
|
||||
CoroutineScope(Dispatchers.IO + activityJob).async(Dispatchers.IO) {
|
||||
modelHandle = this@MainActivity.initModel(assets, "model_data.bin")
|
||||
}
|
||||
|
||||
binding.computButton.setOnClickListener {
|
||||
if (modelHandle == 0L) {
|
||||
Toast.makeText(applicationContext, "Model initializing, please wait",
|
||||
Toast.LENGTH_SHORT).show()
|
||||
}
|
||||
|
||||
if (binding.tensorSeed0.text.isNotEmpty() && binding.tensorSeed2.text.isNotEmpty()) {
|
||||
Toast.makeText(applicationContext, "Computing", Toast.LENGTH_SHORT).show()
|
||||
binding.computeResult.text = runBlocking {
|
||||
val operand0 = binding.tensorSeed0.text.toString().toFloat()
|
||||
val operand2 = binding.tensorSeed2.text.toString().toFloat()
|
||||
startCompute(modelHandle, operand0, operand2).toString()
|
||||
}.toString()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
override fun onDestroy() {
|
||||
activityJob.cancel()
|
||||
if (modelHandle != 0L) {
|
||||
destroyModel(modelHandle)
|
||||
modelHandle = 0
|
||||
}
|
||||
super.onDestroy()
|
||||
}
|
||||
|
||||
companion object {
|
||||
init {
|
||||
System.loadLibrary("basic")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,34 +0,0 @@
|
||||
<vector xmlns:android="http://schemas.android.com/apk/res/android"
|
||||
xmlns:aapt="http://schemas.android.com/aapt"
|
||||
android:width="108dp"
|
||||
android:height="108dp"
|
||||
android:viewportHeight="108"
|
||||
android:viewportWidth="108">
|
||||
<path
|
||||
android:fillType="evenOdd"
|
||||
android:pathData="M32,64C32,64 38.39,52.99 44.13,50.95C51.37,48.37 70.14,49.57 70.14,49.57L108.26,87.69L108,109.01L75.97,107.97L32,64Z"
|
||||
android:strokeColor="#00000000"
|
||||
android:strokeWidth="1">
|
||||
<aapt:attr name="android:fillColor">
|
||||
<gradient
|
||||
android:endX="78.5885"
|
||||
android:endY="90.9159"
|
||||
android:startX="48.7653"
|
||||
android:startY="61.0927"
|
||||
android:type="linear">
|
||||
<item
|
||||
android:color="#44000000"
|
||||
android:offset="0.0" />
|
||||
<item
|
||||
android:color="#00000000"
|
||||
android:offset="1.0" />
|
||||
</gradient>
|
||||
</aapt:attr>
|
||||
</path>
|
||||
<path
|
||||
android:fillColor="#FFFFFF"
|
||||
android:fillType="nonZero"
|
||||
android:pathData="M66.94,46.02L66.94,46.02C72.44,50.07 76,56.61 76,64L32,64C32,56.61 35.56,50.11 40.98,46.06L36.18,41.19C35.45,40.45 35.45,39.3 36.18,38.56C36.91,37.81 38.05,37.81 38.78,38.56L44.25,44.05C47.18,42.57 50.48,41.71 54,41.71C57.48,41.71 60.78,42.57 63.68,44.05L69.11,38.56C69.84,37.81 70.98,37.81 71.71,38.56C72.44,39.3 72.44,40.45 71.71,41.19L66.94,46.02ZM62.94,56.92C64.08,56.92 65,56.01 65,54.88C65,53.76 64.08,52.85 62.94,52.85C61.8,52.85 60.88,53.76 60.88,54.88C60.88,56.01 61.8,56.92 62.94,56.92ZM45.06,56.92C46.2,56.92 47.13,56.01 47.13,54.88C47.13,53.76 46.2,52.85 45.06,52.85C43.92,52.85 43,53.76 43,54.88C43,56.01 43.92,56.92 45.06,56.92Z"
|
||||
android:strokeColor="#00000000"
|
||||
android:strokeWidth="1" />
|
||||
</vector>
|
||||
@@ -1,170 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<vector xmlns:android="http://schemas.android.com/apk/res/android"
|
||||
android:width="108dp"
|
||||
android:height="108dp"
|
||||
android:viewportHeight="108"
|
||||
android:viewportWidth="108">
|
||||
<path
|
||||
android:fillColor="#26A69A"
|
||||
android:pathData="M0,0h108v108h-108z" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M9,0L9,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,0L19,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M29,0L29,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M39,0L39,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M49,0L49,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M59,0L59,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M69,0L69,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M79,0L79,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M89,0L89,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M99,0L99,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,9L108,9"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,19L108,19"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,29L108,29"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,39L108,39"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,49L108,49"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,59L108,59"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,69L108,69"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,79L108,79"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,89L108,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,99L108,99"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,29L89,29"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,39L89,39"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,49L89,49"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,59L89,59"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,69L89,69"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,79L89,79"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M29,19L29,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M39,19L39,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M49,19L49,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M59,19L59,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M69,19L69,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M79,19L79,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
</vector>
|
||||
@@ -1,115 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
|
||||
xmlns:app="http://schemas.android.com/apk/res-auto"
|
||||
xmlns:tools="http://schemas.android.com/tools"
|
||||
android:layout_width="match_parent"
|
||||
android:layout_height="match_parent"
|
||||
tools:context="com.example.android.basic.MainActivity">
|
||||
|
||||
<Button
|
||||
android:id="@+id/computButton"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginBottom="52dp"
|
||||
android:layout_marginTop="8dp"
|
||||
android:text="@string/compute"
|
||||
app:layout_constraintBottom_toBottomOf="parent"
|
||||
app:layout_constraintEnd_toEndOf="parent"
|
||||
app:layout_constraintStart_toStartOf="parent"
|
||||
app:layout_constraintTop_toBottomOf="@+id/computeResult"
|
||||
tools:text="@string/compute" />
|
||||
|
||||
<EditText
|
||||
android:id="@+id/tensorSeed0"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginEnd="96dp"
|
||||
android:layout_marginTop="24dp"
|
||||
android:ems="10"
|
||||
android:inputType="numberDecimal"
|
||||
android:textAlignment="center"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="parent"
|
||||
app:layout_constraintTop_toTopOf="parent" />
|
||||
|
||||
<EditText
|
||||
android:id="@+id/tensorSeed2"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="20dp"
|
||||
android:ems="10"
|
||||
android:inputType="numberDecimal"
|
||||
android:textAlignment="center"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/tensorSeed0"
|
||||
app:layout_constraintHorizontal_bias="1.0"
|
||||
app:layout_constraintStart_toStartOf="@+id/tensorSeed0"
|
||||
app:layout_constraintTop_toBottomOf="@+id/tensorSeed0" />
|
||||
|
||||
<TextView
|
||||
android:id="@+id/computeResult"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginEnd="8dp"
|
||||
android:layout_marginTop="104dp"
|
||||
android:text="@string/none"
|
||||
android:textAlignment="center"
|
||||
android:textAllCaps="false"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/tensorSeed2"
|
||||
app:layout_constraintHorizontal_bias="0.0"
|
||||
app:layout_constraintStart_toStartOf="@+id/tensorSeed2"
|
||||
app:layout_constraintTop_toBottomOf="@+id/tensorSeed2"
|
||||
tools:text="@string/none" />
|
||||
|
||||
<TextView
|
||||
android:id="@+id/resultLabel"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="104dp"
|
||||
android:text="@string/result"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/tensorLabel2"
|
||||
app:layout_constraintHorizontal_bias="1.0"
|
||||
app:layout_constraintStart_toStartOf="@+id/tensorLabel2"
|
||||
app:layout_constraintTop_toBottomOf="@+id/tensorLabel2"
|
||||
tools:text="@string/result" />
|
||||
|
||||
<TextView
|
||||
android:id="@+id/tensorLabel0"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="16dp"
|
||||
android:layout_marginTop="32dp"
|
||||
android:layout_marginEnd="8dp"
|
||||
android:text="@string/label0"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
android:visibility="visible"
|
||||
app:layout_constraintEnd_toStartOf="@+id/tensorSeed0"
|
||||
app:layout_constraintHorizontal_bias="0.446"
|
||||
app:layout_constraintStart_toStartOf="parent"
|
||||
app:layout_constraintTop_toTopOf="parent"
|
||||
tools:text="@string/label0" />
|
||||
|
||||
<TextView
|
||||
android:id="@+id/tensorLabel2"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="36dp"
|
||||
android:text="@string/label2"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
android:visibility="visible"
|
||||
app:layout_constraintEnd_toEndOf="@+id/tensorLabel0"
|
||||
app:layout_constraintHorizontal_bias="1.0"
|
||||
app:layout_constraintStart_toStartOf="@+id/tensorLabel0"
|
||||
app:layout_constraintTop_toBottomOf="@+id/tensorLabel0"
|
||||
tools:text="@string/label2" />
|
||||
|
||||
</androidx.constraintlayout.widget.ConstraintLayout>
|
||||
@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <background android:drawable="@drawable/ic_launcher_background" />
    <foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>
@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <background android:drawable="@drawable/ic_launcher_background" />
    <foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>
Launcher icon bitmaps deleted (10 binary PNGs, 2.0–15 KiB).
@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <color name="colorPrimary">#3F51B5</color>
    <color name="colorPrimaryDark">#303F9F</color>
    <color name="colorAccent">#FF4081</color>
</resources>
@@ -1,8 +0,0 @@
<resources>
    <string name="app_name">NN API Demo: basic</string>
    <string name="compute">Compute</string>
    <string name="result">Result: </string>
    <string name="label0">Augend0: </string>
    <string name="label2">Augend2: </string>
    <string name="none">None</string>
</resources>
@@ -1,8 +0,0 @@
<resources>

    <!-- Base application theme. -->
    <style name="AppTheme" parent="android:Theme.Material.Light.DarkActionBar">
        <!-- Customize your theme here. -->
    </style>

</resources>
nn-samples/sequence/.gitignore
@@ -1 +0,0 @@
/build
@@ -1,42 +0,0 @@
# Android Neural Networks API Sample: Sequence

Android Neural Networks API (NN API) Sample demonstrates basic usages of NN API
with a sequence model that consists of two operations: one addition and one
multiplication. This graph is used for computing a single step of accumulating a
geometric progression.

```
  sumIn ---+
           +--- ADD ---> sumOut
stateIn ---+
           +--- MUL ---> stateOut
  ratio ---+
```

The ratio is a constant tensor, defined in the model. It represents the weights
that would have been learned during a training process. The sumIn and stateIn
are input tensors. Their values will be provided when we execute the model.
These values can change from execution to execution. To compute the sum of a
geometric progression, the graph will be executed multiple times with inputs and
outputs chained together.

```
                +----------+   +----------+       +----------+
  initialSum -->| Simple   |-->| Simple   |--> -->| Simple   |--> sumOut
                | Sequence |   | Sequence |  ...  | Sequence |
initialState -->| Model    |-->| Model    |--> -->| Model    |--> stateOut
                +----------+   +----------+       +----------+
```

## Additional Requirements

- Android 11 SDK to compile
- A device running Android 11

Note: This sample uses its own wrapper to access new NNAPI features in Android
11 due to an known issue. This will be updated after the issue is fixed with the
next R SDK release.

## Screenshots

<img src="screenshot.png" width="480">
@@ -1,28 +0,0 @@
plugins {
    id "ndksamples.android.application"
}

android {
    namespace 'com.example.android.sequence'

    defaultConfig {
        applicationId "com.example.android.sequence"
        minSdkVersion 30
        versionCode 1
        versionName "1.0"
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
        }
    }

    androidResources {
        noCompress 'bin'
    }
}

dependencies {
    implementation libs.androidx.constraintlayout
}
nn-samples/sequence/proguard-rules.pro
@@ -1,21 +0,0 @@
# Add project specific ProGuard rules here.
# You can control the set of applied configuration files using the
# proguardFiles setting in build.gradle.
#
# For more details, see
#   http://developer.android.com/guide/developing/tools/proguard.html

# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
#   public *;
#}

# Uncomment this to preserve the line number information for
# debugging stack traces.
#-keepattributes SourceFile,LineNumberTable

# If you keep the line number information, uncomment this to
# hide the original source file name.
#-renamesourcefileattribute SourceFile
nn-samples/sequence/screenshot.png (binary image, 77 KiB, deleted)
@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity"
            android:exported="true">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
@@ -1,15 +0,0 @@
cmake_minimum_required(VERSION 3.22.1)
project(NnSamplesSequence LANGUAGES CXX)

add_library(sequence
  SHARED
  sequence.cpp
  sequence_model.cpp
)

target_link_libraries(sequence
  # Link with libneuralnetworks.so for NN API
  neuralnetworks
  android
  log
)
@@ -1,62 +0,0 @@
/**
 * Copyright 2020 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <android/sharedmem.h>
#include <fcntl.h>
#include <jni.h>
#include <sys/mman.h>

#include <iomanip>
#include <sstream>
#include <string>

#include "sequence_model.h"

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_android_sequence_MainActivity_initModel(JNIEnv* env,
                                                         jobject /* this */,
                                                         jfloat ratio) {
  auto model = SimpleSequenceModel::Create(ratio);
  if (model == nullptr) {
    __android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
                        "Failed to create the model.");
    return 0;
  }

  return (jlong)(uintptr_t)model.release();
}

extern "C" JNIEXPORT jfloat JNICALL
Java_com_example_android_sequence_MainActivity_compute(JNIEnv* env,
                                                       jobject /* this */,
                                                       jfloat initialValue,
                                                       jint steps,
                                                       jlong _nnModel) {
  SimpleSequenceModel* nn_model = (SimpleSequenceModel*)_nnModel;
  float result = 0.0f;
  nn_model->Compute(initialValue, static_cast<uint32_t>(steps), &result);
  return result;
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_android_sequence_MainActivity_destroyModel(JNIEnv* env,
                                                            jobject /* this */,
                                                            jlong _nnModel) {
  SimpleSequenceModel* nn_model = (SimpleSequenceModel*)_nnModel;
  delete (nn_model);
}
@@ -1,724 +0,0 @@
|
||||
/**
|
||||
* Copyright 2020 The Android Open Source Project
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
#include "sequence_model.h"
|
||||
|
||||
#include <android/log.h>
|
||||
#include <android/sharedmem.h>
|
||||
#include <sys/mman.h>
|
||||
#include <unistd.h>
|
||||
|
||||
#include <algorithm>
|
||||
#include <string>
|
||||
#include <utility>
|
||||
#include <vector>
|
||||
|
||||
/**
|
||||
* A helper method to allocate an ASharedMemory region and create an
|
||||
* ANeuralNetworksMemory object.
|
||||
*/
|
||||
static std::pair<int, ANeuralNetworksMemory*> CreateASharedMemory(
|
||||
const char* name, uint32_t size, int prot) {
|
||||
int fd = ASharedMemory_create(name, size * sizeof(float));
|
||||
|
||||
// Create an ANeuralNetworksMemory object from the corresponding ASharedMemory
|
||||
// objects.
|
||||
ANeuralNetworksMemory* memory = nullptr;
|
||||
int32_t status = ANeuralNetworksMemory_createFromFd(size * sizeof(float),
|
||||
prot, fd, 0, &memory);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromFd failed for %s",
|
||||
name);
|
||||
close(fd);
|
||||
return {-1, nullptr};
|
||||
}
|
||||
|
||||
return {fd, memory};
|
||||
}
|
||||
|
||||
/**
|
||||
* A helper method to fill the ASharedMemory region with the given value.
|
||||
*/
|
||||
static void fillMemory(int fd, uint32_t size, float value) {
|
||||
// Set the values of the memory.
|
||||
// In reality, the values in the shared memory region will be manipulated by
|
||||
// other modules or processes.
|
||||
float* data =
|
||||
reinterpret_cast<float*>(mmap(nullptr, size * sizeof(float),
|
||||
PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
|
||||
std::fill(data, data + size, value);
|
||||
munmap(data, size * sizeof(float));
|
||||
}
|
||||
|
||||
/**
|
||||
* Factory method of SimpleSequenceModel.
|
||||
*
|
||||
* Create and initialize the model, compilation, and memories associated
|
||||
* with the computation graph.
|
||||
*
|
||||
* @return A pointer to the created model on success, nullptr otherwise
|
||||
*/
|
||||
std::unique_ptr<SimpleSequenceModel> SimpleSequenceModel::Create(float ratio) {
|
||||
auto model = std::make_unique<SimpleSequenceModel>(ratio);
|
||||
if (model->CreateSharedMemories() && model->CreateModel() &&
|
||||
model->CreateCompilation() && model->CreateOpaqueMemories()) {
|
||||
return model;
|
||||
}
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
/**
|
||||
* SimpleSequenceModel Constructor.
|
||||
*/
|
||||
SimpleSequenceModel::SimpleSequenceModel(float ratio) : ratio_(ratio) {}
|
||||
|
||||
/**
|
||||
* Initialize the shared memory objects. In reality, the values in the shared
|
||||
* memory region will be manipulated by other modules or processes.
|
||||
*
|
||||
* @return true for success, false otherwise
|
||||
*/
|
||||
bool SimpleSequenceModel::CreateSharedMemories() {
|
||||
// Create ASharedMemory to hold the data for initial state, ratio, and sums.
|
||||
std::tie(initialStateFd_, memoryInitialState_) =
|
||||
CreateASharedMemory("initialState", tensorSize_, PROT_READ);
|
||||
std::tie(ratioFd_, memoryRatio_) =
|
||||
CreateASharedMemory("ratio", tensorSize_, PROT_READ);
|
||||
std::tie(sumInFd_, memorySumIn_) =
|
||||
CreateASharedMemory("sumIn", tensorSize_, PROT_READ | PROT_WRITE);
|
||||
std::tie(sumOutFd_, memorySumOut_) =
|
||||
CreateASharedMemory("sumOut", tensorSize_, PROT_READ | PROT_WRITE);
|
||||
|
||||
// Initialize the ratio tensor.
|
||||
fillMemory(ratioFd_, tensorSize_, ratio_);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a graph that consists of two operations: one addition and one
|
||||
* multiplication. This graph is used for computing a single step of
|
||||
* accumulating a geometric progression.
|
||||
*
|
||||
* sumIn ---+
|
||||
* +--- ADD ---> sumOut
|
||||
* stateIn ---+
|
||||
* +--- MUL ---> stateOut
|
||||
* ratio ---+
|
||||
*
|
||||
* The ratio is a constant tensor, defined in the model. It represents the
|
||||
* weights that would have been learned during a training process.
|
||||
*
|
||||
* The sumIn and stateIn are input tensors. Their values will be provided when
|
||||
* we execute the model. These values can change from execution to execution.
|
||||
*
|
||||
* To compute the sum of a geometric progression, the graph will be executed
|
||||
* multiple times with inputs and outputs chained together.
|
||||
*
|
||||
* +----------+ +----------+ +----------+
|
||||
* initialSum -->| Simple |-->| Simple |--> -->| Simple |--> sumOut
|
||||
* | Sequence | | Sequence | ... | Sequence |
|
||||
* initialState -->| Model |-->| Model |--> -->| Model |--> stateOut
|
||||
* +----------+ +----------+ +----------+
|
||||
*
|
||||
* @return true for success, false otherwise
|
||||
*/
|
||||
bool SimpleSequenceModel::CreateModel() {
|
||||
int32_t status;
|
||||
|
||||
// Create the ANeuralNetworksModel handle.
|
||||
status = ANeuralNetworksModel_create(&model_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
uint32_t dimensions[] = {dimLength_, dimLength_};
|
||||
ANeuralNetworksOperandType float32TensorType{
|
||||
.type = ANEURALNETWORKS_TENSOR_FLOAT32,
|
||||
.dimensionCount = sizeof(dimensions) / sizeof(dimensions[0]),
|
||||
.dimensions = dimensions,
|
||||
.scale = 0.0f,
|
||||
.zeroPoint = 0,
|
||||
};
|
||||
ANeuralNetworksOperandType scalarInt32Type{
|
||||
.type = ANEURALNETWORKS_INT32,
|
||||
.dimensionCount = 0,
|
||||
.dimensions = nullptr,
|
||||
.scale = 0.0f,
|
||||
.zeroPoint = 0,
|
||||
};
|
||||
|
||||
/**
|
||||
* Add operands and operations to construct the model.
|
||||
*
|
||||
* Operands are implicitly identified by the order in which they are added to
|
||||
* the model, starting from 0.
|
||||
*
|
||||
* These indexes are not returned by the model_addOperand call. The
|
||||
* application must manage these values. Here, we use opIdx to do the
|
||||
* bookkeeping.
|
||||
*/
|
||||
uint32_t opIdx = 0;
|
||||
|
||||
// We first add the operand for the NONE activation function, and set its
|
||||
// value to ANEURALNETWORKS_FUSED_NONE.
|
||||
// This constant scalar operand will be used for both ADD and MUL.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &scalarInt32Type);
|
||||
uint32_t fusedActivationFuncNone = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)",
|
||||
fusedActivationFuncNone);
|
||||
return false;
|
||||
}
|
||||
FuseCode fusedActivationCodeValue = ANEURALNETWORKS_FUSED_NONE;
|
||||
status = ANeuralNetworksModel_setOperandValue(
|
||||
model_, fusedActivationFuncNone, &fusedActivationCodeValue,
|
||||
sizeof(fusedActivationCodeValue));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_setOperandValue failed for operand (%d)",
|
||||
fusedActivationFuncNone);
|
||||
return false;
|
||||
}
|
||||
|
||||
// sumIn is one of the user provided input tensors to the trained model.
|
||||
// Its value is determined pre-execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t sumIn = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", sumIn);
|
||||
return false;
|
||||
}
|
||||
|
||||
// stateIn is one of the user provided input tensors to the trained model.
|
||||
// Its value is determined pre-execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t stateIn = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", stateIn);
|
||||
return false;
|
||||
}
|
||||
|
||||
// ratio is a constant tensor that was established during training.
|
||||
// We read these values from the corresponding ANeuralNetworksMemory object.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t ratio = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", ratio);
|
||||
return false;
|
||||
}
|
||||
status = ANeuralNetworksModel_setOperandValueFromMemory(
|
||||
model_, ratio, memoryRatio_, 0, tensorSize_ * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_setOperandValueFromMemory failed "
|
||||
"for operand (%d)",
|
||||
ratio);
|
||||
return false;
|
||||
}
|
||||
|
||||
// sumOut is the output of the ADD operation.
|
||||
// Its value will be computed during execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t sumOut = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", sumOut);
|
||||
return false;
|
||||
}
|
||||
|
||||
// stateOut is the output of the MUL operation.
|
||||
// Its value will be computed during execution.
|
||||
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
|
||||
uint32_t stateOut = opIdx++;
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperand failed for operand (%d)", stateOut);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Add the ADD operation.
|
||||
std::vector<uint32_t> addInputOperands = {
|
||||
sumIn,
|
||||
stateIn,
|
||||
fusedActivationFuncNone,
|
||||
};
|
||||
status = ANeuralNetworksModel_addOperation(
|
||||
model_, ANEURALNETWORKS_ADD, addInputOperands.size(),
|
||||
addInputOperands.data(), 1, &sumOut);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperation failed for ADD");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Add the MUL operation.
|
||||
std::vector<uint32_t> mulInputOperands = {
|
||||
stateIn,
|
||||
ratio,
|
||||
fusedActivationFuncNone,
|
||||
};
|
||||
status = ANeuralNetworksModel_addOperation(
|
||||
model_, ANEURALNETWORKS_MUL, mulInputOperands.size(),
|
||||
mulInputOperands.data(), 1, &stateOut);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_addOperation failed for MUL");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Identify the input and output tensors to the model.
|
||||
// Inputs: {sumIn, stateIn}
|
||||
// Outputs: {sumOut, stateOut}
|
||||
std::vector<uint32_t> modelInputs = {
|
||||
sumIn,
|
||||
stateIn,
|
||||
};
|
||||
std::vector<uint32_t> modelOutputs = {
|
||||
sumOut,
|
||||
stateOut,
|
||||
};
|
||||
status = ANeuralNetworksModel_identifyInputsAndOutputs(
|
||||
model_, modelInputs.size(), modelInputs.data(), modelOutputs.size(),
|
||||
modelOutputs.data());
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_identifyInputsAndOutputs failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Finish constructing the model.
|
||||
// The values of constant operands cannot be altered after
|
||||
// the finish function is called.
|
||||
status = ANeuralNetworksModel_finish(model_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksModel_finish failed");
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compile the model.
|
||||
*
|
||||
* @return true for success, false otherwise
|
||||
*/
|
||||
bool SimpleSequenceModel::CreateCompilation() {
|
||||
int32_t status;
|
||||
|
||||
// Create the ANeuralNetworksCompilation object for the constructed model.
|
||||
status = ANeuralNetworksCompilation_create(model_, &compilation_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksCompilation_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the preference for the compilation_, so that the runtime and drivers
|
||||
// can make better decisions.
|
||||
// Here we prefer to get the answer quickly, so we choose
|
||||
// ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER.
|
||||
status = ANeuralNetworksCompilation_setPreference(
|
||||
compilation_, ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksCompilation_setPreference failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Finish the compilation.
|
||||
status = ANeuralNetworksCompilation_finish(compilation_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksCompilation_finish failed");
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create and initialize the opaque memory objects.
|
||||
*
|
||||
* Opaque memories are suitable for memories that are internal to NNAPI,
|
||||
* e.g. state tensors or intermediate results. Using opaque memories may
|
||||
* reduce the data copying and transformation overhead.
|
||||
*
|
||||
* In this example, only the initial sum, the initial state, and the final sum
|
||||
* are interesting to us. We do not need to know the intermediate results. So,
|
||||
* we create two pairs of opaque memories for intermediate sums and states.
|
||||
*
|
||||
* @return true for success, false otherwise
|
||||
*/
|
||||
bool SimpleSequenceModel::CreateOpaqueMemories() {
|
||||
int32_t status;
|
||||
|
||||
// Create opaque memories for sum tensors.
|
||||
// We start from creating a memory descriptor and describing all of the
|
||||
// intended memory usages.
|
||||
ANeuralNetworksMemoryDesc* sumDesc = nullptr;
|
||||
status = ANeuralNetworksMemoryDesc_create(&sumDesc);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Specify that the state memory will be used as the first input (sumIn)
|
||||
// of the compilation. Note that the index "0" here means the first operand
|
||||
// of the modelInputs list {sumIn, stateIn}, which means sumIn.
|
||||
status =
|
||||
ANeuralNetworksMemoryDesc_addInputRole(sumDesc, compilation_, 0, 1.0f);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_addInputRole failed");
|
||||
ANeuralNetworksMemoryDesc_free(sumDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Specify that the state memory will also be used as the first output
|
||||
// (sumOut) of the compilation. Note that the index "0" here means the
|
||||
// first operand of the modelOutputs list {sumOut, stateOut}, which means
|
||||
// sumOut.
|
||||
status =
|
||||
ANeuralNetworksMemoryDesc_addOutputRole(sumDesc, compilation_, 0, 1.0f);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_addOutputRole failed");
|
||||
ANeuralNetworksMemoryDesc_free(sumDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Finish the memory descriptor.
|
||||
status = ANeuralNetworksMemoryDesc_finish(sumDesc);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_finish failed");
|
||||
ANeuralNetworksMemoryDesc_free(sumDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Create two opaque memories from the finished descriptor: one for input
|
||||
// and one for output. We will swap the two memories after each single
|
||||
// execution step.
|
||||
status = ANeuralNetworksMemory_createFromDesc(sumDesc, &memoryOpaqueSumIn_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromDesc failed for sum memory #1");
|
||||
ANeuralNetworksMemoryDesc_free(sumDesc);
|
||||
return false;
|
||||
}
|
||||
status = ANeuralNetworksMemory_createFromDesc(sumDesc, &memoryOpaqueSumOut_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromDesc failed for sum memory #2");
|
||||
ANeuralNetworksMemoryDesc_free(sumDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// It is safe to free the memory descriptor once all of the memories have
|
||||
// been created.
|
||||
ANeuralNetworksMemoryDesc_free(sumDesc);
|
||||
|
||||
// Create opaque memories for state tensors.
|
||||
// We start from creating a memory descriptor and describing all of the
|
||||
// intended memory usages.
|
||||
ANeuralNetworksMemoryDesc* stateDesc = nullptr;
|
||||
status = ANeuralNetworksMemoryDesc_create(&stateDesc);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Specify that the state memory will be used as the second input (stateIn)
|
||||
// of the compilation. Note that the index "1" here means the second operand
|
||||
// of the modelInputs list {sumIn, stateIn}, which means stateIn.
|
||||
status =
|
||||
ANeuralNetworksMemoryDesc_addInputRole(stateDesc, compilation_, 1, 1.0f);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_addInputRole failed");
|
||||
ANeuralNetworksMemoryDesc_free(stateDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Specify that the state memory will also be used as the second output
|
||||
// (stateOut) of the compilation. Note that the index "1" here means the
|
||||
// second operand of the modelOutputs list {sumOut, stateOut}, which means
|
||||
// stateOut.
|
||||
status =
|
||||
ANeuralNetworksMemoryDesc_addOutputRole(stateDesc, compilation_, 1, 1.0f);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_addOutputRole failed");
|
||||
ANeuralNetworksMemoryDesc_free(stateDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Finish the memory descriptor.
|
||||
status = ANeuralNetworksMemoryDesc_finish(stateDesc);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemoryDesc_finish failed");
|
||||
ANeuralNetworksMemoryDesc_free(stateDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Create two opaque memories from the finished descriptor: one for input
|
||||
// and one for output. We will swap the two memories after each single
|
||||
// execution step.
|
||||
status =
|
||||
ANeuralNetworksMemory_createFromDesc(stateDesc, &memoryOpaqueStateIn_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromDesc failed for state memory #1");
|
||||
ANeuralNetworksMemoryDesc_free(stateDesc);
|
||||
return false;
|
||||
}
|
||||
status =
|
||||
ANeuralNetworksMemory_createFromDesc(stateDesc, &memoryOpaqueStateOut_);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksMemory_createFromDesc failed for state memory #2");
|
||||
ANeuralNetworksMemoryDesc_free(stateDesc);
|
||||
return false;
|
||||
}
|
||||
|
||||
// It is safe to free the memory descriptor once all of the memories have
|
||||
// been created.
|
||||
ANeuralNetworksMemoryDesc_free(stateDesc);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Dispatch a single computation step of accumulating the geometric progression.
|
||||
*/
|
||||
static bool DispatchSingleStep(
|
||||
ANeuralNetworksCompilation* compilation, ANeuralNetworksMemory* sumIn,
|
||||
uint32_t sumInLength, ANeuralNetworksMemory* stateIn,
|
||||
uint32_t stateInLength, ANeuralNetworksMemory* sumOut,
|
||||
uint32_t sumOutLength, ANeuralNetworksMemory* stateOut,
|
||||
uint32_t stateOutLength, const ANeuralNetworksEvent* waitFor,
|
||||
ANeuralNetworksEvent** event) {
|
||||
// Create an ANeuralNetworksExecution object from the compiled model.
|
||||
ANeuralNetworksExecution* execution;
|
||||
int32_t status = ANeuralNetworksExecution_create(compilation, &execution);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_create failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the memory for the sumIn tensor.
|
||||
// Note that the index "0" here means the first operand of the modelInputs
|
||||
// list {sumIn, stateIn}, which means sumIn.
|
||||
status = ANeuralNetworksExecution_setInputFromMemory(
|
||||
execution, 0, nullptr, sumIn, 0, sumInLength * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setInputFromMemory failed for sumIn");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the memory for the stateIn tensor.
|
||||
// Note that the index "1" here means the first operand of the modelInputs
|
||||
// list {sumIn, stateIn}, which means stateIn.
|
||||
status = ANeuralNetworksExecution_setInputFromMemory(
|
||||
execution, 1, nullptr, stateIn, 0, stateInLength * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setInputFromMemory failed for stateIn");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the sumOut tensor that will be filled by executing the model.
|
||||
status = ANeuralNetworksExecution_setOutputFromMemory(
|
||||
execution, 0, nullptr, sumOut, 0, sumOutLength * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setOutputFromMemory failed for sumOut");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set the stateOut tensor that will be filled by executing the model.
|
||||
status = ANeuralNetworksExecution_setOutputFromMemory(
|
||||
execution, 1, nullptr, stateOut, 0, stateOutLength * sizeof(float));
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(
|
||||
ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_setOutputFromMemory failed for stateOut");
|
||||
return false;
|
||||
}
|
||||
|
||||
// Dispatch the execution of the model.
|
||||
// Note that the execution here is asynchronous with dependencies.
|
||||
const ANeuralNetworksEvent* const* dependencies = nullptr;
|
||||
uint32_t numDependencies = 0;
|
||||
if (waitFor != nullptr) {
|
||||
dependencies = &waitFor;
|
||||
numDependencies = 1;
|
||||
}
|
||||
status = ANeuralNetworksExecution_startComputeWithDependencies(
|
||||
execution, dependencies, numDependencies,
|
||||
0, // infinite timeout duration
|
||||
event);
|
||||
if (status != ANEURALNETWORKS_NO_ERROR) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"ANeuralNetworksExecution_compute failed");
|
||||
return false;
|
||||
}
|
||||
|
||||
ANeuralNetworksExecution_free(execution);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute the sum of a geometric progression.
|
||||
*
|
||||
* @param initialValue the initial value of the geometric progression
|
||||
* @param steps the number of terms to accumulate
|
||||
* @return computed result, or 0.0f if there is error.
|
||||
*/
|
||||
bool SimpleSequenceModel::Compute(float initialValue, uint32_t steps,
|
||||
float* result) {
|
||||
if (!result) {
|
||||
return false;
|
||||
}
|
||||
if (steps == 0) {
|
||||
*result = 0.0f;
|
||||
return true;
|
||||
}
|
||||
|
||||
// Setup initial values.
|
||||
// In reality, the values in the shared memory region will be manipulated by
|
||||
// other modules or processes.
|
||||
fillMemory(sumInFd_, tensorSize_, 0);
|
||||
fillMemory(initialStateFd_, tensorSize_, initialValue);
|
||||
|
||||
// The event objects for all computation steps.
|
||||
std::vector<ANeuralNetworksEvent*> events(steps, nullptr);
|
||||
|
||||
for (uint32_t i = 0; i < steps; i++) {
|
||||
// We will only use ASharedMemory for boundary step executions, and use
|
||||
// opaque memories for intermediate results to minimize the data copying.
|
||||
// Note that when setting an opaque memory as the input or output of an
|
||||
// execution, the offset and length must be set to 0 to indicate the
|
||||
// entire memory region is used.
|
||||
ANeuralNetworksMemory* sumInMemory;
|
||||
ANeuralNetworksMemory* sumOutMemory;
|
||||
ANeuralNetworksMemory* stateInMemory;
|
||||
ANeuralNetworksMemory* stateOutMemory;
|
||||
uint32_t sumInLength, sumOutLength, stateInLength, stateOutLength;
|
||||
if (i == 0) {
|
||||
sumInMemory = memorySumIn_;
|
||||
sumInLength = tensorSize_;
|
||||
stateInMemory = memoryInitialState_;
|
||||
stateInLength = tensorSize_;
|
||||
} else {
|
||||
sumInMemory = memoryOpaqueSumIn_;
|
||||
sumInLength = 0;
|
||||
stateInMemory = memoryOpaqueStateIn_;
|
||||
stateInLength = 0;
|
||||
}
|
||||
if (i == steps - 1) {
|
||||
sumOutMemory = memorySumOut_;
|
||||
sumOutLength = tensorSize_;
|
||||
} else {
|
||||
sumOutMemory = memoryOpaqueSumOut_;
|
||||
sumOutLength = 0;
|
||||
}
|
||||
stateOutMemory = memoryOpaqueStateOut_;
|
||||
stateOutLength = 0;
|
||||
|
||||
// Dispatch a single computation step with a dependency on the previous
|
||||
// step, if any. The actual computation will start once its dependency has
|
||||
// finished.
|
||||
const ANeuralNetworksEvent* waitFor = i == 0 ? nullptr : events[i - 1];
|
||||
if (!DispatchSingleStep(compilation_, sumInMemory, sumInLength,
|
||||
stateInMemory, stateInLength, sumOutMemory,
|
||||
sumOutLength, stateOutMemory, stateOutLength,
|
||||
waitFor, &events[i])) {
|
||||
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
|
||||
"DispatchSingleStep failed for step %d", i);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Swap the memory handles: the outputs from the current step execution
|
||||
// will be fed in as the inputs of the next step execution.
|
||||
std::swap(memoryOpaqueSumIn_, memoryOpaqueSumOut_);
|
||||
std::swap(memoryOpaqueStateIn_, memoryOpaqueStateOut_);
|
||||
}
|
||||
|
||||
// Since the events are chained, we only need to wait for the last one.
|
||||
ANeuralNetworksEvent_wait(events.back());
|
||||
|
||||
// Get the results.
|
||||
float* outputTensorPtr =
|
||||
reinterpret_cast<float*>(mmap(nullptr, tensorSize_ * sizeof(float),
|
||||
PROT_READ, MAP_SHARED, sumOutFd_, 0));
|
||||
*result = outputTensorPtr[0];
|
||||
munmap(outputTensorPtr, tensorSize_ * sizeof(float));
|
||||
|
||||
// Cleanup event objects.
|
||||
for (auto* event : events) {
|
||||
ANeuralNetworksEvent_free(event);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* SimpleSequenceModel Destructor.
|
||||
*
|
||||
* Release NN API objects and close the file descriptors.
|
||||
*/
|
||||
SimpleSequenceModel::~SimpleSequenceModel() {
|
||||
ANeuralNetworksCompilation_free(compilation_);
|
||||
ANeuralNetworksModel_free(model_);
|
||||
|
||||
ANeuralNetworksMemory_free(memorySumIn_);
|
||||
ANeuralNetworksMemory_free(memorySumOut_);
|
||||
ANeuralNetworksMemory_free(memoryInitialState_);
|
||||
ANeuralNetworksMemory_free(memoryRatio_);
|
||||
close(initialStateFd_);
|
||||
close(sumInFd_);
|
||||
close(sumOutFd_);
|
||||
close(ratioFd_);
|
||||
|
||||
ANeuralNetworksMemory_free(memoryOpaqueStateIn_);
|
||||
ANeuralNetworksMemory_free(memoryOpaqueStateOut_);
|
||||
ANeuralNetworksMemory_free(memoryOpaqueSumIn_);
|
||||
ANeuralNetworksMemory_free(memoryOpaqueSumOut_);
|
||||
}
|
||||
@@ -1,87 +0,0 @@
/**
 * Copyright 2020 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef NNAPI_SIMPLE_MODEL_H
#define NNAPI_SIMPLE_MODEL_H

// #include "neuralnetworks_wrapper.h"
#include <android/NeuralNetworks.h>

#include <memory>

/**
 * SimpleSequenceModel
 * Build up the hardcoded graph of
 *
 *   sumIn ---+
 *            +--- ADD ---> sumOut
 * stateIn ---+
 *            +--- MUL ---> stateOut
 *   ratio ---+
 *
 * Operands are all 2-D TENSOR_FLOAT32 of:
 *     dimLength x dimLength
 * with NO fused_activation operation
 *
 * This graph is used for computing a single step of accumulating a finite
 * geometric progression.
 */
class SimpleSequenceModel {
 public:
  static std::unique_ptr<SimpleSequenceModel> Create(float ratio);

  // Prefer using SimpleSequenceModel::Create.
  explicit SimpleSequenceModel(float ratio);
  ~SimpleSequenceModel();

  bool Compute(float initialValue, uint32_t steps, float* result);

 private:
  bool CreateSharedMemories();
  bool CreateModel();
  bool CreateCompilation();
  bool CreateOpaqueMemories();

  ANeuralNetworksModel* model_ = nullptr;
  ANeuralNetworksCompilation* compilation_ = nullptr;

  static constexpr uint32_t dimLength_ = 200;
  static constexpr uint32_t tensorSize_ = dimLength_ * dimLength_;

  const float ratio_;

  // ASharedMemories. In reality, the values in the shared memory region will
  // be manipulated by other modules or processes.
  int initialStateFd_ = -1;
  int ratioFd_ = -1;
  int sumInFd_ = -1;
  int sumOutFd_ = -1;
  ANeuralNetworksMemory* memoryInitialState_ = nullptr;
  ANeuralNetworksMemory* memoryRatio_ = nullptr;
  ANeuralNetworksMemory* memorySumIn_ = nullptr;
  ANeuralNetworksMemory* memorySumOut_ = nullptr;

  // Opaque memories.
  ANeuralNetworksMemory* memoryOpaqueStateIn_ = nullptr;
  ANeuralNetworksMemory* memoryOpaqueStateOut_ = nullptr;
  ANeuralNetworksMemory* memoryOpaqueSumIn_ = nullptr;
  ANeuralNetworksMemory* memoryOpaqueSumOut_ = nullptr;
};

#define LOG_TAG "NNAPI_SEQUENCE"

#endif  // NNAPI_SIMPLE_MODEL_H
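
As a rough usage sketch of the class declared above (hypothetical standalone
snippet, error handling elided; it mirrors what the JNI bridge in sequence.cpp
already does):

```cpp
// Sketch only: assumes sequence_model.h is on the include path and the
// target links against libneuralnetworks (device running Android 11+).
#include "sequence_model.h"

float SumFirstTenTerms() {
  auto model = SimpleSequenceModel::Create(/*ratio=*/0.5f);  // build + compile
  float result = 0.0f;
  if (model != nullptr) {
    model->Compute(/*initialValue=*/1.0f, /*steps=*/10, &result);
  }
  return result;  // ~1.998 for ratio 0.5
}
```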
@@ -1,139 +0,0 @@
|
||||
/**
|
||||
* Copyright 2020 The Android Open Source Project
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package com.example.android.sequence;
|
||||
|
||||
import android.app.Activity;
|
||||
import android.os.AsyncTask;
|
||||
import android.os.Bundle;
|
||||
import android.util.Log;
|
||||
import android.view.View;
|
||||
import android.widget.Button;
|
||||
import android.widget.EditText;
|
||||
import android.widget.TextView;
|
||||
import android.widget.Toast;
|
||||
|
||||
public class MainActivity extends Activity {
|
||||
// Used to load the 'native-lib' library on application startup.
|
||||
static { System.loadLibrary("sequence"); }
|
||||
|
||||
private final String LOG_TAG = "NNAPI_SEQUENCE";
|
||||
private long modelHandle = 0;
|
||||
|
||||
public native long initModel(float ratio);
|
||||
|
||||
public native float compute(float initialValue, int steps, long modelHandle);
|
||||
|
||||
public native void destroyModel(long modelHandle);
|
||||
|
||||
@Override
|
||||
protected void onCreate(Bundle savedInstanceState) {
|
||||
super.onCreate(savedInstanceState);
|
||||
setContentView(R.layout.activity_main);
|
||||
|
||||
Button resetButton = findViewById(R.id.reset_button);
|
||||
resetButton.setOnClickListener(new View.OnClickListener() {
|
||||
@Override
|
||||
public void onClick(View v) {
|
||||
EditText ratioInput = findViewById(R.id.ratio_input);
|
||||
String ratioStr = ratioInput.getText().toString();
|
||||
if (ratioStr.isEmpty()) {
|
||||
Toast.makeText(getApplicationContext(), "Invalid ratio!", Toast.LENGTH_SHORT)
|
||||
.show();
|
||||
return;
|
||||
}
|
||||
|
||||
if (modelHandle != 0) {
|
||||
destroyModel(modelHandle);
|
||||
modelHandle = 0;
|
||||
}
|
||||
TextView ratioText = findViewById(R.id.ratio_text);
|
||||
TextView resultText = findViewById(R.id.result_text);
|
||||
ratioText.setText(ratioStr);
|
||||
resultText.setText(R.string.none);
|
||||
new InitModelTask().execute(Float.valueOf(ratioStr));
|
||||
}
|
||||
});
|
||||
|
||||
Button computeButton = findViewById(R.id.compute_button);
|
||||
computeButton.setOnClickListener(new View.OnClickListener() {
|
||||
@Override
|
||||
public void onClick(View v) {
|
||||
if (modelHandle != 0) {
|
||||
EditText initialValueInput = findViewById(R.id.initial_value_input);
|
||||
EditText stepsInput = findViewById(R.id.steps_input);
|
||||
String initialValueStr = initialValueInput.getText().toString();
|
||||
String stepsStr = stepsInput.getText().toString();
|
||||
if (initialValueStr.isEmpty() || stepsStr.isEmpty()) {
|
||||
Toast.makeText(getApplicationContext(), "Invalid initial value or steps!",
|
||||
Toast.LENGTH_SHORT)
|
||||
.show();
|
||||
return;
|
||||
}
|
||||
new ComputeTask().execute(initialValueStr, stepsStr);
|
||||
} else {
|
||||
Toast.makeText(getApplicationContext(), "Model has not been initialized!",
|
||||
Toast.LENGTH_SHORT)
|
||||
.show();
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void onDestroy() {
|
||||
if (modelHandle != 0) {
|
||||
destroyModel(modelHandle);
|
||||
modelHandle = 0;
|
||||
}
|
||||
super.onDestroy();
|
||||
}
|
||||
|
||||
private class InitModelTask extends AsyncTask<Float, Void, Long> {
|
||||
@Override
|
||||
protected Long doInBackground(Float... inputs) {
|
||||
if (inputs.length != 1) {
|
||||
Log.e(LOG_TAG, "Incorrect number of input values");
|
||||
return 0L;
|
||||
}
|
||||
// Prepare the model in a separate thread.
|
||||
return initModel(inputs[0]);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void onPostExecute(Long result) {
|
||||
modelHandle = result;
|
||||
}
|
||||
}
|
||||
|
||||
private class ComputeTask extends AsyncTask<String, Void, Float> {
|
||||
@Override
|
||||
protected Float doInBackground(String... inputs) {
|
||||
if (inputs.length != 2) {
|
||||
Log.e(LOG_TAG, "Incorrect number of input values");
|
||||
return 0.0f;
|
||||
}
|
||||
// Reusing the same prepared model with different inputs.
|
||||
return compute(Float.valueOf(inputs[0]), Integer.valueOf(inputs[1]), modelHandle);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void onPostExecute(Float result) {
|
||||
TextView tv = findViewById(R.id.result_text);
|
||||
tv.setText(String.valueOf(result));
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,34 +0,0 @@
|
||||
<vector xmlns:android="http://schemas.android.com/apk/res/android"
|
||||
xmlns:aapt="http://schemas.android.com/aapt"
|
||||
android:width="108dp"
|
||||
android:height="108dp"
|
||||
android:viewportHeight="108"
|
||||
android:viewportWidth="108">
|
||||
<path
|
||||
android:fillType="evenOdd"
|
||||
android:pathData="M32,64C32,64 38.39,52.99 44.13,50.95C51.37,48.37 70.14,49.57 70.14,49.57L108.26,87.69L108,109.01L75.97,107.97L32,64Z"
|
||||
android:strokeColor="#00000000"
|
||||
android:strokeWidth="1">
|
||||
<aapt:attr name="android:fillColor">
|
||||
<gradient
|
||||
android:endX="78.5885"
|
||||
android:endY="90.9159"
|
||||
android:startX="48.7653"
|
||||
android:startY="61.0927"
|
||||
android:type="linear">
|
||||
<item
|
||||
android:color="#44000000"
|
||||
android:offset="0.0" />
|
||||
<item
|
||||
android:color="#00000000"
|
||||
android:offset="1.0" />
|
||||
</gradient>
|
||||
</aapt:attr>
|
||||
</path>
|
||||
<path
|
||||
android:fillColor="#FFFFFF"
|
||||
android:fillType="nonZero"
|
||||
android:pathData="M66.94,46.02L66.94,46.02C72.44,50.07 76,56.61 76,64L32,64C32,56.61 35.56,50.11 40.98,46.06L36.18,41.19C35.45,40.45 35.45,39.3 36.18,38.56C36.91,37.81 38.05,37.81 38.78,38.56L44.25,44.05C47.18,42.57 50.48,41.71 54,41.71C57.48,41.71 60.78,42.57 63.68,44.05L69.11,38.56C69.84,37.81 70.98,37.81 71.71,38.56C72.44,39.3 72.44,40.45 71.71,41.19L66.94,46.02ZM62.94,56.92C64.08,56.92 65,56.01 65,54.88C65,53.76 64.08,52.85 62.94,52.85C61.8,52.85 60.88,53.76 60.88,54.88C60.88,56.01 61.8,56.92 62.94,56.92ZM45.06,56.92C46.2,56.92 47.13,56.01 47.13,54.88C47.13,53.76 46.2,52.85 45.06,52.85C43.92,52.85 43,53.76 43,54.88C43,56.01 43.92,56.92 45.06,56.92Z"
|
||||
android:strokeColor="#00000000"
|
||||
android:strokeWidth="1" />
|
||||
</vector>
|
||||
@@ -1,170 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<vector xmlns:android="http://schemas.android.com/apk/res/android"
|
||||
android:width="108dp"
|
||||
android:height="108dp"
|
||||
android:viewportHeight="108"
|
||||
android:viewportWidth="108">
|
||||
<path
|
||||
android:fillColor="#26A69A"
|
||||
android:pathData="M0,0h108v108h-108z" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M9,0L9,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,0L19,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M29,0L29,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M39,0L39,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M49,0L49,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M59,0L59,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M69,0L69,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M79,0L79,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M89,0L89,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M99,0L99,108"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,9L108,9"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,19L108,19"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,29L108,29"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,39L108,39"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,49L108,49"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,59L108,59"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,69L108,69"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,79L108,79"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,89L108,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M0,99L108,99"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,29L89,29"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,39L89,39"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,49L89,49"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,59L89,59"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,69L89,69"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M19,79L89,79"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M29,19L29,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M39,19L39,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M49,19L49,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M59,19L59,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M69,19L69,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
<path
|
||||
android:fillColor="#00000000"
|
||||
android:pathData="M79,19L79,89"
|
||||
android:strokeColor="#33FFFFFF"
|
||||
android:strokeWidth="0.8" />
|
||||
</vector>
|
||||
@@ -1,179 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
|
||||
xmlns:app="http://schemas.android.com/apk/res-auto"
|
||||
xmlns:tools="http://schemas.android.com/tools"
|
||||
android:layout_width="match_parent"
|
||||
android:layout_height="match_parent"
|
||||
tools:context="com.example.android.sequence.MainActivity">
|
||||
|
||||
<Button
|
||||
android:id="@+id/compute_button"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginTop="8dp"
|
||||
android:layout_marginBottom="52dp"
|
||||
android:text="@string/compute"
|
||||
app:layout_constraintBottom_toBottomOf="parent"
|
||||
app:layout_constraintEnd_toEndOf="parent"
|
||||
app:layout_constraintStart_toStartOf="parent"
|
||||
app:layout_constraintTop_toBottomOf="@+id/result_text"
|
||||
tools:text="@string/compute" />
|
||||
|
||||
<Button
|
||||
android:id="@+id/reset_button"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginBottom="52dp"
|
||||
android:text="@string/reset"
|
||||
app:layout_constraintBottom_toBottomOf="parent"
|
||||
app:layout_constraintEnd_toEndOf="parent"
|
||||
app:layout_constraintHorizontal_bias="0.464"
|
||||
app:layout_constraintStart_toStartOf="parent"
|
||||
app:layout_constraintTop_toTopOf="parent"
|
||||
app:layout_constraintVertical_bias="0.174"
|
||||
tools:text="@string/reset" />
|
||||
|
||||
<EditText
|
||||
android:id="@+id/initial_value_input"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginTop="264dp"
|
||||
android:layout_marginEnd="92dp"
|
||||
android:ems="10"
|
||||
android:inputType="numberDecimal"
|
||||
android:textAlignment="center"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="parent"
|
||||
app:layout_constraintTop_toTopOf="parent" />
|
||||
<EditText
|
||||
android:id="@+id/steps_input"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginTop="316dp"
|
||||
android:layout_marginEnd="88dp"
|
||||
android:ems="10"
|
||||
android:inputType="number"
|
||||
android:textAlignment="center"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="parent"
|
||||
app:layout_constraintTop_toTopOf="parent" />
|
||||
<TextView
|
||||
android:id="@+id/steps_label"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="248dp"
|
||||
android:text="@string/steps"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
|
||||
app:layout_constraintHorizontal_bias="1.0"
|
||||
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
|
||||
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
|
||||
tools:text="@string/steps" />
|
||||
<TextView
|
||||
android:id="@+id/initial_value_label"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="196dp"
|
||||
android:text="@string/initial_value"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
|
||||
app:layout_constraintHorizontal_bias="0.966"
|
||||
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
|
||||
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
|
||||
tools:text="@string/initial_value" />
|
||||
<EditText
|
||||
android:id="@+id/ratio_input"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="wrap_content"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="40dp"
|
||||
android:ems="10"
|
||||
android:inputType="numberDecimal"
|
||||
android:textAlignment="center"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/initial_value_input"
|
||||
app:layout_constraintStart_toStartOf="@+id/initial_value_input"
|
||||
app:layout_constraintTop_toTopOf="parent" />
|
||||
<TextView
|
||||
android:id="@+id/ratio_input_label"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="84dp"
|
||||
android:layout_marginTop="52dp"
|
||||
android:text="@string/ratio"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
android:visibility="visible"
|
||||
app:layout_constraintStart_toStartOf="parent"
|
||||
app:layout_constraintTop_toTopOf="parent"
|
||||
tools:text="@string/ratio" />
|
||||
<TextView
|
||||
android:id="@+id/ratio_text"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="18dp"
|
||||
android:layout_marginTop="136dp"
|
||||
android:layout_marginEnd="8dp"
|
||||
android:text="@string/none"
|
||||
android:textAlignment="center"
|
||||
android:textAllCaps="false"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/ratio_input"
|
||||
app:layout_constraintHorizontal_bias="0.652"
|
||||
app:layout_constraintStart_toEndOf="@+id/result_label"
|
||||
app:layout_constraintStart_toStartOf="@+id/ratio_input"
|
||||
app:layout_constraintTop_toBottomOf="@+id/ratio_input"
|
||||
tools:text="@string/none" />
|
||||
<TextView
|
||||
android:id="@+id/ratio_text_label"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="140dp"
|
||||
android:text="@string/ratio"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
|
||||
app:layout_constraintHorizontal_bias="1.0"
|
||||
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
|
||||
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
|
||||
tools:text="@string/ratio" />
|
||||
<TextView
|
||||
android:id="@+id/result_text"
|
||||
android:layout_width="161dp"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="18dp"
|
||||
android:layout_marginTop="292dp"
|
||||
android:layout_marginEnd="8dp"
|
||||
android:text="@string/none"
|
||||
android:textAlignment="center"
|
||||
android:textAllCaps="false"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/ratio_input"
|
||||
app:layout_constraintHorizontal_bias="0.957"
|
||||
app:layout_constraintStart_toEndOf="@+id/result_label"
|
||||
app:layout_constraintStart_toStartOf="@+id/ratio_input"
|
||||
app:layout_constraintTop_toBottomOf="@+id/ratio_input"
|
||||
tools:text="@string/none" />
|
||||
<TextView
|
||||
android:id="@+id/result_label"
|
||||
android:layout_width="wrap_content"
|
||||
android:layout_height="32dp"
|
||||
android:layout_marginStart="8dp"
|
||||
android:layout_marginTop="296dp"
|
||||
android:text="@string/result"
|
||||
android:textAppearance="@android:style/TextAppearance"
|
||||
android:textSize="18sp"
|
||||
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
|
||||
app:layout_constraintHorizontal_bias="0.941"
|
||||
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
|
||||
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
|
||||
tools:text="@string/result" />
|
||||
|
||||
</androidx.constraintlayout.widget.ConstraintLayout>
|
||||
@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <background android:drawable="@drawable/ic_launcher_background" />
    <foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>
@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <background android:drawable="@drawable/ic_launcher_background" />
    <foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>
Before Width: | Height: | Size: 3.0 KiB |
Before Width: | Height: | Size: 4.9 KiB |
Before Width: | Height: | Size: 2.0 KiB |
Before Width: | Height: | Size: 2.8 KiB |
Before Width: | Height: | Size: 4.5 KiB |
Before Width: | Height: | Size: 6.9 KiB |
Before Width: | Height: | Size: 6.3 KiB |
Before Width: | Height: | Size: 10 KiB |
Before Width: | Height: | Size: 9.0 KiB |
Before Width: | Height: | Size: 15 KiB |
@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <color name="colorPrimary">#3F51B5</color>
    <color name="colorPrimaryDark">#303F9F</color>
    <color name="colorAccent">#FF4081</color>
</resources>
@@ -1,10 +0,0 @@
<resources>
    <string name="app_name">NN API Demo: sequence</string>
    <string name="compute">Compute</string>
    <string name="reset">Reset</string>
    <string name="result">Result: </string>
    <string name="initial_value">Initial Value: </string>
    <string name="ratio">Ratio: </string>
    <string name="steps">Steps: </string>
    <string name="none">None</string>
</resources>
@@ -1,8 +0,0 @@
<resources>

    <!-- Base application theme. -->
    <style name="AppTheme" parent="android:Theme.Material.Light.DarkActionBar">
        <!-- Customize your theme here. -->
    </style>

</resources>
@@ -40,8 +40,6 @@ include(":native-audio:app")
include(":native-codec:app")
include(":native-midi:app")
include(":native-plasma:app")
include(":nn-samples:basic")
include(":nn-samples:sequence")
include(":orderfile:app")
include(":prefab:curl-ssl:app")
include(":prefab:prefab-dependency:app")