24 Commits

Author SHA1 Message Date
Dan Albert
5dd0e384fe Increase warning level in unit-test. 2025-09-18 00:34:31 +00:00
Dan Albert
4cbc8c4abf Increase warning level in textured-teapot.
And fix the bug that the warning found, where loading the texture image
would fail if the read couldn't complete in a single call.
2025-09-17 23:59:21 +00:00
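For context, the usual shape of that fix is to loop until all of the requested bytes have arrived, since short reads are legal. A minimal sketch of the pattern, assuming an `AAsset`-style source (the commit's actual fix may differ):

```
// Sketch only: read until the buffer is full or the stream ends, rather
// than assuming one AAsset_read call returns everything.
#include <android/asset_manager.h>

#include <cstddef>
#include <cstdint>
#include <vector>

bool ReadAll(AAsset* asset, std::vector<uint8_t>& out) {
  out.resize(AAsset_getLength(asset));
  size_t total = 0;
  while (total < out.size()) {
    int n = AAsset_read(asset, out.data() + total, out.size() - total);
    if (n <= 0) return false;  // Error or unexpected EOF.
    total += n;
  }
  return true;
}
```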
Dan Albert
431d0f4a22 Increase warning level in more-teapots. 2025-09-17 23:59:21 +00:00
Dan Albert
f457ffa9fc Increase warning level in classic-teapot. 2025-09-17 23:59:21 +00:00
Dan Albert
9e3341b165 Increase warning level in image-decoder. 2025-09-17 23:59:21 +00:00
Dan Albert
b5dc046a28 Increase warning level in choreographer-30fps. 2025-09-17 23:59:21 +00:00
Dan Albert
d55c85d6f4 Increase warning level in teapots/common/ndk_helper.
I'm ignoring the deeper question of why some of these seemingly
important arguments are unused, because this whole directory is destined
for the scrap heap; but until I've actually finished writing that new
merged graphics sample, this will stick around.
2025-09-17 23:59:21 +00:00
Dan Albert
e1dd22594d Remove unhelpful prefab samples.
These don't demonstrate any NDK behavior, just AGP behavior. The
dependency management they show is also shown elsewhere since we rely on
these features for sharing code between samples anyway.
2025-09-17 23:57:35 +00:00
Dan Albert
6df22d6fef Increase warning level in orderfile sample. 2025-09-17 23:55:22 +00:00
Dan Albert
ceb0acc5e8 Increase warning level in sanitizers sample.
It's a little strange to do this in a sample that's intentionally doing
bad things, but it isn't intentionally doing *these* wrong things, so it's
still helpful to move the sample to `add_app_library` so we can apply
consistent behaviors throughout the repo.
2025-09-17 23:54:58 +00:00
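(`add_app_library` is this repo's CMake wrapper; its use is visible in the native-audio diff below.) A hypothetical sketch of the shape of such a helper, assuming it just wraps `add_library` and applies shared compile options; the flags here are an assumption, and the real `AppLibrary` module may do more:

```
# Hypothetical sketch: one place to apply consistent behaviors, such as
# warning flags, to every sample library in the repo.
function(add_app_library name)
  add_library(${name} ${ARGN})
  target_compile_options(${name} PRIVATE -Wall -Wextra -Werror)
endfunction()
```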
Dan Albert
0a3d7e2ecd Clean up sanitizers CMakeLists.txt.
Remove all the boilerplate comments and the pointless find_library
indirection for liblog.
2025-09-17 23:54:58 +00:00
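The indirection being removed is the old project-template boilerplate; `log` ships with the NDK sysroot, so the toolchain resolves it by name. A before/after sketch (the target name is made up for illustration):

```
# Before: boilerplate indirection through a cache variable.
find_library(log-lib log)
target_link_libraries(my-sample ${log-lib})

# After: the NDK toolchain finds its own libraries by name.
target_link_libraries(my-sample log)
```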
Dan Albert
7308e9c8d2 Increase warning level in sensorgraph. 2025-09-17 23:51:11 +00:00
Dan Albert
fa71ca4019 Update the native-activity README.
I changed what this sample does but forgot to update the docs.
2025-09-17 23:11:18 +00:00
Dan Albert
3d0e23ed75 Remove san-angeles sample.
This is just another OpenGL sample, and a very old one at that. It
wasn't even originally written for Android: it was ported from asm to
OpenGL for desktop, and this is the Android port of that (though it
still includes the win32 code!). It's also not doing anything useful
that the
other GL samples aren't.
2025-09-17 22:59:41 +00:00
Dan Albert
c890b01187 Add READMEs to the deleted samples directories.
It's not perfect, because stale links that point anywhere but this
README will still 404, but it's at least something people can find if
they walk up the dead link to the top of the sample's directory.
2025-09-17 20:51:26 +00:00
Dan Albert
c2c70d4364 Remove native-plasma.
This is just native-activity with extra steps.

Fixes https://github.com/android/ndk-samples/issues/1139.
2025-09-17 20:51:04 +00:00
Dan Albert
d5f29a739f Simplify native-activity graphics code.
We don't need OpenGL just to do a color fill. Save that for the OpenGL
samples. Replace it with a trivial AHardwareBuffer-based color fill.

I've altered the old animation, which didn't really work the way I think
it was intended to. It looks like someone had intended for the
accelerometer to alter the color, but it actually only animated between
black and bright green. I've removed the sensor code as superfluous
anyway, so just do something simpler: switch from red to green to blue
every second. I kept some amount of animation rather than just clearing
a solid color because if we only draw once and have no update cycle then
choreographer and most of the event handling code also become useless,
and without that the sample no longer bears any resemblance to a real
app.

https://github.com/android/ndk-samples/issues/1139
2025-09-17 20:51:04 +00:00
Dan Albert
556e6041fd Remove useless comments. 2025-09-17 20:51:04 +00:00
Dan Albert
11d93063e4 Remove pointless sensor code from native-activity.
We already have a separate sensor-graph sample. There's no need to
complicate this NativeActivity hello world with accelerometer logspam.

https://github.com/android/ndk-samples/issues/1139
2025-09-17 20:51:04 +00:00
Dan Albert
f03dfd8068 Remove Neural Networks samples.
This API is deprecated in favor of TFLite:
https://developer.android.com/ndk/guides/neuralnetworks/migration-guide.
TFLite has its own docs and samples, and it isn't an NDK API anyway, so
we don't need to replace these samples with TFLite samples. Just delete
the thing we're recommending against so people don't get confused into
following bad advice.
2025-09-17 18:54:56 +00:00
Dan Albert
28e7d7bf57 Remove native-media sample.
This is essentially an OpenMAX AL duplicate of native-codec. OpenMAX AL
isn't deprecated, but it's not recommended either. I'm removing the
sample because new code should prefer to use the NDK Media Codec APIs as
shown in the native-codec sample.
2025-09-17 18:54:12 +00:00
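For reference, the recommended NDK Media Codec path looks roughly like the sketch below: a decoder configured from an extractor's track format. This is a minimal illustration with error handling omitted, not the native-codec sample's actual code:

```
#include <android/native_window.h>
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>
#include <media/NdkMediaFormat.h>

// Sketch: set up a decoder for track 0 of the media file given by fd.
// Real code should check every media_status_t and pick the right track.
AMediaCodec* CreateDecoderForFd(int fd, off64_t offset, off64_t length,
                                ANativeWindow* window) {
  AMediaExtractor* extractor = AMediaExtractor_new();
  AMediaExtractor_setDataSourceFd(extractor, fd, offset, length);
  AMediaFormat* format = AMediaExtractor_getTrackFormat(extractor, 0);
  const char* mime = nullptr;
  AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime);
  AMediaCodec* codec = AMediaCodec_createDecoderByType(mime);
  // Decoded frames render straight to the window; pass nullptr instead to
  // receive output buffers on the CPU.
  AMediaCodec_configure(codec, format, window, nullptr /*crypto*/, 0);
  AMediaCodec_start(codec);
  AMediaFormat_delete(format);
  AMediaExtractor_delete(extractor);
  return codec;
}
```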
Dan Albert
caa7866a0c Add missing project to native-codec.
I'd missed this one somehow when I fixed all the others. Add it to shut
up the warning.
2025-09-16 00:04:48 +00:00
Dan Albert
fff222e9b4 Increase warning level in native-audio. 2025-09-16 00:02:24 +00:00
Dan Albert
302f10ad86 Migrate native-audio to C++.
I was originally thinking we'd delete this sample in favor of
hello-oboe, since this one uses OpenSLES, but it also has quite a few
more features than hello-oboe has. I think I may actually go the other
way on that: migrate this sample to Oboe and delete hello-oboe.
2025-09-16 00:02:24 +00:00
223 changed files with 328 additions and 13628 deletions

hello-neon/README.md (new file, 6 lines)

@@ -0,0 +1,6 @@
# Sample removed
This sample has been removed in favor of the new `vectorization` sample, which
demonstrates more options for vectorization and includes a benchmark to
highlight that performance decisions cannot be made correctly unless you measure
them.

native-activity/README.md

@@ -1,8 +1,24 @@
# Native Activity
Native Activity is an Android sample that initializes a GLES 2.0 context and
reads accelerometer data from C code using
[Native Activity](http://developer.android.com/reference/android/app/NativeActivity.html).
This is an Android sample that uses [NativeActivity] with `native_app_glue`,
which enables building NDK apps without having to write any Java code. In
practice most apps, even games that are predominantly native code, will need to
call some Java APIs or customize their app's activity further.
The more modern approach to this is to use [GameActivity], which has all the
same benefits as `NativeActivity` and `native_app_glue`, while also making it
easier to include Java code in your app without a rewrite later. It's also
source compatible. This sample will likely migrate to `GameActivity` in the
future.
The app here is intentionally quite simple, aiming to show the core event and
draw loop necessary for an app using `native_app_glue` without any extra
clutter. It uses `AChoreographer` to manage the update/render loop, and uses
`ANativeWindow` and `AHardwareBuffer` to update the screen with a simple color
clear.
[GameActivity]: https://developer.android.com/games/agdk/game-activity
[NativeActivity]: http://developer.android.com/reference/android/app/NativeActivity.html
This sample uses the new
[Android Studio CMake plugin](http://tools.android.com/tech-docs/external-c-builds)

native-activity CMakeLists.txt

@@ -32,12 +32,10 @@ target_include_directories(native-activity PRIVATE
target_link_libraries(native-activity
android
$<LINK_LIBRARY:WHOLE_ARCHIVE,native_app_glue>
EGL
GLESv1_CM
log
)
if (ANDROID_ABI STREQUAL riscv64)
if(ANDROID_ABI STREQUAL riscv64)
# This sample uses Sensor Manager and Choreographer APIs which are
# deprecated in modern API levels. Our minSdkVersion is 21, but we also
# build for riscv64, which isn't a supported ABI yet and so that
@@ -48,4 +46,4 @@ if (ANDROID_ABI STREQUAL riscv64)
# now, just disable the deprecation warnings so we can get the riscv64
# samples building in CI, and we can come back to clean that up later.
target_compile_options(native-activity PRIVATE -Wno-deprecated-declarations)
endif ()
endif()

native-activity main.cpp

@@ -15,23 +15,19 @@
*
*/
// BEGIN_INCLUDE(all)
#include <EGL/egl.h>
#include <GLES/gl.h>
#include <android/choreographer.h>
#include <android/hardware_buffer.h>
#include <android/log.h>
#include <android/sensor.h>
#include <android/native_window.h>
#include <android/set_abort_message.h>
#include <android_native_app_glue.h>
#include <jni.h>
#include <cassert>
#include <cerrno>
#include <chrono>
#include <cstdlib>
#include <cstring>
#include <initializer_list>
#include <memory>
using namespace std::literals::chrono_literals;
#define LOG_TAG "native-activity"
#define _LOG(priority, fmt, ...) \
@@ -66,46 +62,35 @@
} \
} while (false)
/**
* Our saved state data.
*/
struct SavedState {
float angle;
int32_t x;
int32_t y;
// Note: little endian, the opposite of normal hex color codes. ABGR, rather
// than RGBA.
enum class Color : uint32_t {
kRed = 0x000000ff,
kGreen = 0x0000ff00,
kBlue = 0x00ff0000,
};
/**
* Shared state for our app.
*/
struct Engine {
android_app* app;
class Engine {
public:
explicit Engine(android_app* app) : app_(app) {}
ASensorManager* sensorManager;
const ASensor* accelerometerSensor;
ASensorEventQueue* sensorEventQueue;
EGLDisplay display;
EGLSurface surface;
EGLContext context;
int32_t width;
int32_t height;
SavedState state;
void CreateSensorListener(ALooper_callbackFunc callback) {
CHECK_NOT_NULL(app);
sensorManager = ASensorManager_getInstance();
if (sensorManager == nullptr) {
void AttachWindow() {
if (ANativeWindow_setBuffersGeometry(
app_->window, 0, 0, AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM) < 0) {
LOGE("Unable to set window buffer geometry");
window_initialized = false;
return;
}
accelerometerSensor = ASensorManager_getDefaultSensor(
sensorManager, ASENSOR_TYPE_ACCELEROMETER);
sensorEventQueue = ASensorManager_createEventQueue(
sensorManager, app->looper, ALOOPER_POLL_CALLBACK, callback, this);
window_initialized = true;
color_ = Color::kRed;
last_update_ = std::chrono::steady_clock::now();
}
void DetachWindow() { window_initialized = false; }
/// Resumes ticking the application.
void Resume() {
// Checked to make sure we don't double schedule Choreographer.
@@ -122,7 +107,11 @@ struct Engine {
void Pause() { running_ = false; }
private:
bool running_;
android_app* app_;
bool window_initialized = false;
bool running_ = false;
Color color_ = Color::kRed;
std::chrono::time_point<std::chrono::steady_clock> last_update_;
void ScheduleNextTick() {
AChoreographer_postFrameCallback(AChoreographer_getInstance(), Tick, this);
@@ -150,9 +139,6 @@ struct Engine {
return;
}
// Input and sensor feedback is handled via their own callbacks.
// Choreographer ensures that those callbacks run before this callback does.
// Choreographer does not continuously schedule the callback. We have to re-
// register the callback each time we're ticked.
ScheduleNextTick();
@@ -161,202 +147,73 @@ struct Engine {
}
void Update() {
state.angle += .01f;
if (state.angle > 1) {
state.angle = 0;
auto now = std::chrono::steady_clock::now();
if (now - last_update_ > 1s) {
switch (color_) {
case Color::kRed:
color_ = Color::kGreen;
break;
case Color::kGreen:
color_ = Color::kBlue;
break;
case Color::kBlue:
color_ = Color::kRed;
break;
default:
fatal("unexpected color %08x", static_cast<uint32_t>(color_));
}
last_update_ = now;
}
}
void DrawFrame() {
if (display == nullptr) {
// No display.
void DrawFrame() const {
if (app_->window == nullptr) {
LOGE("Attempted to draw frame but there is no window attached");
return;
}
// Just fill the screen with a color.
glClearColor(((float)state.x) / width, state.angle,
((float)state.y) / height, 1);
glClear(GL_COLOR_BUFFER_BIT);
ANativeWindow_Buffer buffer;
if (ANativeWindow_lock(app_->window, &buffer, nullptr) < 0) {
LOGE("Unable to lock window buffer");
return;
}
eglSwapBuffers(display, surface);
if (!window_initialized) {
// If for some reason we were not able to initialize the window geometry,
// then we can't assume the buffer format. We could detect the buffer's
// format and adjust our buffer fill here to accommodate that, but that's
// a bit beyond the scope of this sample.
return;
}
for (auto y = 0; y < buffer.height; y++) {
for (auto x = 0; x < buffer.width; x++) {
size_t pixel_idx = y * buffer.stride + x;
reinterpret_cast<uint32_t*>(buffer.bits)[pixel_idx] =
static_cast<uint32_t>(color_);
}
}
ANativeWindow_unlockAndPost(app_->window);
}
};
/**
* Initialize an EGL context for the current display.
*/
static int engine_init_display(Engine* engine) {
// initialize OpenGL ES and EGL
/*
* Here specify the attributes of the desired configuration.
* Below, we select an EGLConfig with at least 8 bits per color
* component compatible with on-screen windows
*/
const EGLint attribs[] = {EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE};
EGLint w, h, format;
EGLint numConfigs;
EGLConfig config = nullptr;
EGLSurface surface;
EGLContext context;
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, nullptr, nullptr);
/* Here, the application chooses the configuration it desires.
* find the best match if possible, otherwise use the very first one
*/
eglChooseConfig(display, attribs, nullptr, 0, &numConfigs);
std::unique_ptr<EGLConfig[]> supportedConfigs(new EGLConfig[numConfigs]);
assert(supportedConfigs);
eglChooseConfig(display, attribs, supportedConfigs.get(), numConfigs,
&numConfigs);
assert(numConfigs);
auto i = 0;
for (; i < numConfigs; i++) {
auto& cfg = supportedConfigs[i];
EGLint r, g, b, d;
if (eglGetConfigAttrib(display, cfg, EGL_RED_SIZE, &r) &&
eglGetConfigAttrib(display, cfg, EGL_GREEN_SIZE, &g) &&
eglGetConfigAttrib(display, cfg, EGL_BLUE_SIZE, &b) &&
eglGetConfigAttrib(display, cfg, EGL_DEPTH_SIZE, &d) && r == 8 &&
g == 8 && b == 8 && d == 0) {
config = supportedConfigs[i];
break;
}
}
if (i == numConfigs) {
config = supportedConfigs[0];
}
if (config == nullptr) {
LOGW("Unable to initialize EGLConfig");
return -1;
}
/* EGL_NATIVE_VISUAL_ID is an attribute of the EGLConfig that is
* guaranteed to be accepted by ANativeWindow_setBuffersGeometry().
* As soon as we picked a EGLConfig, we can safely reconfigure the
* ANativeWindow buffers to match, using EGL_NATIVE_VISUAL_ID. */
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
surface =
eglCreateWindowSurface(display, config, engine->app->window, nullptr);
/* A version of OpenGL has not been specified here. This will default to
* OpenGL 1.0. You will need to change this if you want to use the newer
* features of OpenGL like shaders. */
context = eglCreateContext(display, config, nullptr, nullptr);
if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
LOGW("Unable to eglMakeCurrent");
return -1;
}
eglQuerySurface(display, surface, EGL_WIDTH, &w);
eglQuerySurface(display, surface, EGL_HEIGHT, &h);
engine->display = display;
engine->context = context;
engine->surface = surface;
engine->width = w;
engine->height = h;
engine->state.angle = 0;
// Check openGL on the system
auto opengl_info = {GL_VENDOR, GL_RENDERER, GL_VERSION, GL_EXTENSIONS};
for (auto name : opengl_info) {
auto info = glGetString(name);
LOGI("OpenGL Info: %s", info);
}
// Initialize GL state.
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
glEnable(GL_CULL_FACE);
glShadeModel(GL_SMOOTH);
glDisable(GL_DEPTH_TEST);
return 0;
}
/**
* Tear down the EGL context currently associated with the display.
*/
static void engine_term_display(Engine* engine) {
if (engine->display != EGL_NO_DISPLAY) {
eglMakeCurrent(engine->display, EGL_NO_SURFACE, EGL_NO_SURFACE,
EGL_NO_CONTEXT);
if (engine->context != EGL_NO_CONTEXT) {
eglDestroyContext(engine->display, engine->context);
}
if (engine->surface != EGL_NO_SURFACE) {
eglDestroySurface(engine->display, engine->surface);
}
eglTerminate(engine->display);
}
engine->Pause();
engine->display = EGL_NO_DISPLAY;
engine->context = EGL_NO_CONTEXT;
engine->surface = EGL_NO_SURFACE;
}
/**
* Process the next input event.
*/
static int32_t engine_handle_input(android_app* app, AInputEvent* event) {
auto* engine = (Engine*)app->userData;
if (AInputEvent_getType(event) == AINPUT_EVENT_TYPE_MOTION) {
engine->state.x = AMotionEvent_getX(event, 0);
engine->state.y = AMotionEvent_getY(event, 0);
return 1;
}
return 0;
}
/**
* Process the next main command.
*/
static void engine_handle_cmd(android_app* app, int32_t cmd) {
auto* engine = (Engine*)app->userData;
switch (cmd) {
case APP_CMD_SAVE_STATE:
// The system has asked us to save our current state. Do so.
engine->app->savedState = malloc(sizeof(SavedState));
*((SavedState*)engine->app->savedState) = engine->state;
engine->app->savedStateSize = sizeof(SavedState);
break;
case APP_CMD_INIT_WINDOW:
// The window is being shown, get it ready.
if (engine->app->window != nullptr) {
engine_init_display(engine);
}
engine->AttachWindow();
break;
case APP_CMD_TERM_WINDOW:
// The window is being hidden or closed, clean it up.
engine_term_display(engine);
engine->DetachWindow();
break;
case APP_CMD_GAINED_FOCUS:
// When our app gains focus, we start monitoring the accelerometer.
if (engine->accelerometerSensor != nullptr) {
ASensorEventQueue_enableSensor(engine->sensorEventQueue,
engine->accelerometerSensor);
// We'd like to get 60 events per second (in us).
ASensorEventQueue_setEventRate(engine->sensorEventQueue,
engine->accelerometerSensor,
(1000L / 60) * 1000);
}
engine->Resume();
break;
case APP_CMD_LOST_FOCUS:
// When our app loses focus, we stop monitoring the accelerometer.
// This is to avoid consuming battery while not being used.
if (engine->accelerometerSensor != nullptr) {
ASensorEventQueue_disableSensor(engine->sensorEventQueue,
engine->accelerometerSensor);
}
engine->Pause();
break;
default:
@@ -364,45 +221,16 @@ static void engine_handle_cmd(android_app* app, int32_t cmd) {
}
}
int OnSensorEvent(int /* fd */, int /* events */, void* data) {
CHECK_NOT_NULL(data);
Engine* engine = reinterpret_cast<Engine*>(data);
CHECK_NOT_NULL(engine->accelerometerSensor);
ASensorEvent event;
while (ASensorEventQueue_getEvents(engine->sensorEventQueue, &event, 1) > 0) {
LOGI("accelerometer: x=%f y=%f z=%f", event.acceleration.x,
event.acceleration.y, event.acceleration.z);
}
// From the docs:
//
// Implementations should return 1 to continue receiving callbacks, or 0 to
// have this file descriptor and callback unregistered from the looper.
return 1;
}
/**
* This is the main entry point of a native application that is using
* android_native_app_glue. It runs in its own thread, with its own
* event loop for receiving input events and doing other things.
*/
void android_main(android_app* state) {
Engine engine{};
Engine engine{state};
memset(&engine, 0, sizeof(engine));
state->userData = &engine;
state->onAppCmd = engine_handle_cmd;
state->onInputEvent = engine_handle_input;
engine.app = state;
// Prepare to monitor accelerometer
engine.CreateSensorListener(OnSensorEvent);
if (state->savedState != nullptr) {
// We are starting with a previous saved state; restore from it.
engine.state = *(SavedState*)state->savedState;
}
while (!state->destroyRequested) {
// Our input, sensor, and update/render logic is all driven by callbacks, so
@@ -418,7 +246,4 @@ void android_main(android_app* state) {
source->process(state, source);
}
}
engine_term_display(&engine);
}
// END_INCLUDE(all)

native-audio CMakeLists.txt

@@ -1,10 +1,10 @@
cmake_minimum_required(VERSION 3.22.1)
project(NativeAudio LANGUAGES C)
project(NativeAudio LANGUAGES CXX)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall")
include(AppLibrary)
add_library(native-audio-jni SHARED
native-audio-jni.c
add_app_library(native-audio-jni SHARED
native-audio-jni.cpp
)
# Include libraries needed for native-audio-jni lib
@@ -13,3 +13,13 @@ target_link_libraries(native-audio-jni
log
OpenSLES
)
if(ANDROID_ABI STREQUAL riscv64)
# This sample uses OpenSLES, which was deprecated in API 26. Our
# minSdkVersion is 23, but we also build for riscv64, which isn't a
# supported ABI yet and so that configuration is built for the latest API
# level supported by the NDK.
#
# Longer term, this sample should migrate to Oboe.
target_compile_options(native-audio-jni PRIVATE -Wno-deprecated-declarations)
endif()

native-audio-jni.cpp

@@ -195,9 +195,9 @@ short* createResampledBuf(uint32_t idx, uint32_t srcRate, unsigned* size) {
}
// this callback handler is called every time a buffer finishes playing
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void* context) {
void bqPlayerCallback([[maybe_unused]] SLAndroidSimpleBufferQueueItf bq,
void*) {
assert(bq == bqPlayerBufferQueue);
assert(NULL == context);
// for streaming playback, replace this test by logic to find and fill the
// next buffer
if (--nextCount > 0 && NULL != nextBuffer && 0 != nextSize) {
@@ -218,9 +218,9 @@ void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void* context) {
}
// this callback handler is called every time a buffer finishes recording
void bqRecorderCallback(SLAndroidSimpleBufferQueueItf bq, void* context) {
void bqRecorderCallback([[maybe_unused]] SLAndroidSimpleBufferQueueItf bq,
void*) {
assert(bq == recorderBufferQueue);
assert(NULL == context);
// for streaming recording, here we would call Enqueue to give recorder the
// next buffer to fill but instead, this is a one-time buffer so we stop
// recording
@@ -234,8 +234,8 @@ void bqRecorderCallback(SLAndroidSimpleBufferQueueItf bq, void* context) {
}
// create the engine and output mix objects
JNIEXPORT void JNICALL Java_com_example_nativeaudio_NativeAudio_createEngine(
JNIEnv* env, jclass clazz) {
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_createEngine(JNIEnv*, jclass) {
SLresult result;
// create engine
@@ -286,9 +286,9 @@ JNIEXPORT void JNICALL Java_com_example_nativeaudio_NativeAudio_createEngine(
}
// create buffer queue audio player
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_createBufferQueueAudioPlayer(
JNIEnv* env, jclass clazz, jint sampleRate, jint bufSize) {
JNIEnv*, jclass, jint sampleRate, jint bufSize) {
SLresult result;
if (sampleRate >= 0 && bufSize >= 0) {
bqPlayerSampleRate = sampleRate * 1000;
@@ -395,14 +395,14 @@ Java_com_example_nativeaudio_NativeAudio_createBufferQueueAudioPlayer(
}
// create URI audio player
JNIEXPORT jboolean JNICALL
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_createUriAudioPlayer(JNIEnv* env,
jclass clazz,
jclass,
jstring uri) {
SLresult result;
// convert Java string to UTF-8
const char* utf8 = (*env)->GetStringUTFChars(env, uri, NULL);
const char* utf8 = env->GetStringUTFChars(uri, NULL);
assert(NULL != utf8);
// configure audio source
@@ -429,7 +429,7 @@ Java_com_example_nativeaudio_NativeAudio_createUriAudioPlayer(JNIEnv* env,
(void)result;
// release the Java string and UTF-8
(*env)->ReleaseStringUTFChars(env, uri, utf8);
env->ReleaseStringUTFChars(uri, utf8);
// realize the player
result = (*uriPlayerObject)->Realize(uriPlayerObject, SL_BOOLEAN_FALSE);
@@ -471,9 +471,9 @@ Java_com_example_nativeaudio_NativeAudio_createUriAudioPlayer(JNIEnv* env,
// set the playing state for the URI audio player
// to PLAYING (true) or PAUSED (false)
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setPlayingUriAudioPlayer(
JNIEnv* env, jclass clazz, jboolean isPlaying) {
JNIEnv*, jclass, jboolean isPlaying) {
SLresult result;
// make sure the URI audio player was created
@@ -488,9 +488,9 @@ Java_com_example_nativeaudio_NativeAudio_setPlayingUriAudioPlayer(
}
// set the whole file looping state for the URI audio player
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setLoopingUriAudioPlayer(
JNIEnv* env, jclass clazz, jboolean isLooping) {
JNIEnv*, jclass, jboolean isLooping) {
SLresult result;
// make sure the URI audio player was created
@@ -515,9 +515,9 @@ static SLMuteSoloItf getMuteSolo() {
return bqPlayerMuteSolo;
}
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setChannelMuteUriAudioPlayer(
JNIEnv* env, jclass clazz, jint chan, jboolean mute) {
JNIEnv*, jclass, jint chan, jboolean mute) {
SLresult result;
SLMuteSoloItf muteSoloItf = getMuteSolo();
if (NULL != muteSoloItf) {
@@ -527,9 +527,9 @@ Java_com_example_nativeaudio_NativeAudio_setChannelMuteUriAudioPlayer(
}
}
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setChannelSoloUriAudioPlayer(
JNIEnv* env, jclass clazz, jint chan, jboolean solo) {
JNIEnv*, jclass, jint chan, jboolean solo) {
SLresult result;
SLMuteSoloItf muteSoloItf = getMuteSolo();
if (NULL != muteSoloItf) {
@@ -539,9 +539,9 @@ Java_com_example_nativeaudio_NativeAudio_setChannelSoloUriAudioPlayer(
}
}
JNIEXPORT jint JNICALL
Java_com_example_nativeaudio_NativeAudio_getNumChannelsUriAudioPlayer(
JNIEnv* env, jclass clazz) {
extern "C" JNIEXPORT jint JNICALL
Java_com_example_nativeaudio_NativeAudio_getNumChannelsUriAudioPlayer(JNIEnv*,
jclass) {
SLuint8 numChannels;
SLresult result;
SLMuteSoloItf muteSoloItf = getMuteSolo();
@@ -570,9 +570,9 @@ static SLVolumeItf getVolume() {
return bqPlayerVolume;
}
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setVolumeUriAudioPlayer(
JNIEnv* env, jclass clazz, jint millibel) {
JNIEnv*, jclass, jint millibel) {
SLresult result;
SLVolumeItf volumeItf = getVolume();
if (NULL != volumeItf) {
@@ -582,9 +582,8 @@ Java_com_example_nativeaudio_NativeAudio_setVolumeUriAudioPlayer(
}
}
JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setMuteUriAudioPlayer(JNIEnv* env,
jclass clazz,
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setMuteUriAudioPlayer(JNIEnv*, jclass,
jboolean mute) {
SLresult result;
SLVolumeItf volumeItf = getVolume();
@@ -595,9 +594,9 @@ Java_com_example_nativeaudio_NativeAudio_setMuteUriAudioPlayer(JNIEnv* env,
}
}
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_enableStereoPositionUriAudioPlayer(
JNIEnv* env, jclass clazz, jboolean enable) {
JNIEnv*, jclass, jboolean enable) {
SLresult result;
SLVolumeItf volumeItf = getVolume();
if (NULL != volumeItf) {
@@ -607,9 +606,9 @@ Java_com_example_nativeaudio_NativeAudio_enableStereoPositionUriAudioPlayer(
}
}
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setStereoPositionUriAudioPlayer(
JNIEnv* env, jclass clazz, jint permille) {
JNIEnv*, jclass, jint permille) {
SLresult result;
SLVolumeItf volumeItf = getVolume();
if (NULL != volumeItf) {
@@ -620,8 +619,8 @@ Java_com_example_nativeaudio_NativeAudio_setStereoPositionUriAudioPlayer(
}
// enable reverb on the buffer queue player
JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_enableReverb(JNIEnv* env, jclass clazz,
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_enableReverb(JNIEnv*, jclass,
jboolean enabled) {
SLresult result;
@@ -650,8 +649,9 @@ Java_com_example_nativeaudio_NativeAudio_enableReverb(JNIEnv* env, jclass clazz,
}
// select the desired clip and play count, and enqueue the first buffer if idle
JNIEXPORT jboolean JNICALL Java_com_example_nativeaudio_NativeAudio_selectClip(
JNIEnv* env, jclass clazz, jint which, jint count) {
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_selectClip(JNIEnv*, jclass, jint which,
jint count) {
if (pthread_mutex_trylock(&audioEngineLock)) {
// If we could not acquire audio engine lock, reject this request and client
// should re-try
@@ -722,13 +722,13 @@ JNIEXPORT jboolean JNICALL Java_com_example_nativeaudio_NativeAudio_selectClip(
}
// create asset audio player
JNIEXPORT jboolean JNICALL
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_createAssetAudioPlayer(
JNIEnv* env, jclass clazz, jobject assetManager, jstring filename) {
JNIEnv* env, jclass, jobject assetManager, jstring filename) {
SLresult result;
// convert Java string to UTF-8
const char* utf8 = (*env)->GetStringUTFChars(env, filename, NULL);
const char* utf8 = env->GetStringUTFChars(filename, NULL);
assert(NULL != utf8);
// use asset manager to open asset by filename
@@ -737,7 +737,7 @@ Java_com_example_nativeaudio_NativeAudio_createAssetAudioPlayer(
AAsset* asset = AAssetManager_open(mgr, utf8, AASSET_MODE_UNKNOWN);
// release the Java string and UTF-8
(*env)->ReleaseStringUTFChars(env, filename, utf8);
env->ReleaseStringUTFChars(filename, utf8);
// the asset might not be found
if (NULL == asset) {
@@ -811,9 +811,9 @@ Java_com_example_nativeaudio_NativeAudio_createAssetAudioPlayer(
}
// set the playing state for the asset audio player
JNIEXPORT void JNICALL
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_setPlayingAssetAudioPlayer(
JNIEnv* env, jclass clazz, jboolean isPlaying) {
JNIEnv*, jclass, jboolean isPlaying) {
SLresult result;
// make sure the asset audio player was created
@@ -830,9 +830,8 @@ Java_com_example_nativeaudio_NativeAudio_setPlayingAssetAudioPlayer(
// create audio recorder: recorder is not in fast path
// like to avoid excessive re-sampling while playing back from Hello &
// Android clip
JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_createAudioRecorder(JNIEnv* env,
jclass clazz) {
extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_nativeaudio_NativeAudio_createAudioRecorder(JNIEnv*, jclass) {
SLresult result;
// configure audio source
@@ -892,8 +891,8 @@ Java_com_example_nativeaudio_NativeAudio_createAudioRecorder(JNIEnv* env,
}
// set the recording state for the audio recorder
JNIEXPORT void JNICALL Java_com_example_nativeaudio_NativeAudio_startRecording(
JNIEnv* env, jclass clazz) {
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_startRecording(JNIEnv*, jclass) {
SLresult result;
if (pthread_mutex_trylock(&audioEngineLock)) {
@@ -930,8 +929,8 @@ JNIEXPORT void JNICALL Java_com_example_nativeaudio_NativeAudio_startRecording(
}
// shut down the native audio system
JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_shutdown(JNIEnv* env, jclass clazz) {
extern "C" JNIEXPORT void JNICALL
Java_com_example_nativeaudio_NativeAudio_shutdown(JNIEnv*, jclass) {
// destroy buffer queue audio player object, and invalidate all associated
// interfaces
if (bqPlayerObject != NULL) {

native-codec CMakeLists.txt

@@ -1,4 +1,5 @@
cmake_minimum_required(VERSION 3.22.1)
project(NativeCodec LANGUAGES CXX)
include(AppLibrary)

native-media/README.md

@@ -1,73 +1,5 @@
# Native Media
# Sample removed
Native Media is an Android sample that uses OpenMAX AL to play a video.
Note: This sample requires an MPEG-2 Transport Stream file to be placed in
app/src/main/assets/clips/NativeMedia.ts and encoded as:
```
video: H.264 baseline profile
audio: AAC LC stereo
```
For demonstration purposes we have supplied such a .ts file; any actual stream
must be created according to the MPEG-2 specification.
This sample uses the new
[Android Studio CMake plugin](http://tools.android.com/tech-docs/external-c-builds)
with C++ support.
## Pre-requisites
- Android Studio 2.2+ with [NDK](https://developer.android.com/ndk/) bundle.
## Getting Started
1. [Download Android Studio](http://developer.android.com/sdk/index.html)
1. Launch Android Studio.
1. Open the sample directory.
1. Open *File/Project Structure...*
- Click *Download* or *Select NDK location*.
1. Click *Tools/Android/Sync Project with Gradle Files*.
1. Click *Run/Run 'app'*.
## Screenshots
![screenshot](screenshot.png)
## Known Issues
- Android-N preview: native player path is not working, under debug now
## Support
If you've found an error in these samples, please
[file an issue](https://github.com/googlesamples/android-ndk/issues/new).
Patches are encouraged, and may be submitted by
[forking this project](https://github.com/googlesamples/android-ndk/fork) and
submitting a pull request through GitHub. Please see
[CONTRIBUTING.md](../CONTRIBUTING.md) for more details.
- [Stack Overflow](http://stackoverflow.com/questions/tagged/android-ndk)
- [Android Tools Feedbacks](http://tools.android.com/feedback)
## License
Copyright 2015 Google, Inc.
Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for
additional information regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
This sample has been removed because we no longer recommend using OpenMAX AL in
new code. See the `native-codec` sample instead for an example of how to use the
Android Media APIs.

native-media build.gradle (deleted)

@@ -1,21 +0,0 @@
plugins {
id "ndksamples.android.application"
}
android {
namespace 'com.example.nativemedia'
defaultConfig {
applicationId 'com.example.nativemedia'
}
externalNativeBuild {
cmake {
path 'src/main/cpp/CMakeLists.txt'
}
}
androidResources {
noCompress 'ts'
}
}

native-media AndroidManifest.xml (deleted)

@@ -1,19 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
<uses-feature android:glEsVersion="0x00020000" />
<!-- INTERNET is needed to use a URI-based media player, depending on the URI -->
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
<application android:allowBackup="false"
android:fullBackupContent="false"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name">
<activity android:name=".NativeMedia"
android:label="@string/app_name"
android:exported="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>

native-media CMakeLists.txt (deleted)

@@ -1,17 +0,0 @@
cmake_minimum_required(VERSION 3.22.1)
project(NativeMedia LANGUAGES C)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -UNDEBUG")
add_library(native-media-jni SHARED
android_fopen.c
native-media-jni.c
)
# Include libraries needed for native-media-jni lib
target_link_libraries(native-media-jni
android
log
OpenMAXAL
)

android_fopen.c (deleted)

@@ -1,54 +0,0 @@
/*
* Copyright (C) 2016 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// The original code is from https://github.com/netguy204/gambit-game-lib
#include "android_fopen.h"
#include <android/asset_manager.h>
#include <errno.h>
static int android_read(void* cookie, char* buf, int size) {
return AAsset_read((AAsset*)cookie, buf, size);
}
static int android_write(void* cookie, const char* buf, int size) {
return EACCES; // can't provide write access to the apk
}
static fpos_t android_seek(void* cookie, fpos_t offset, int whence) {
return AAsset_seek((AAsset*)cookie, offset, whence);
}
static int android_close(void* cookie) {
AAsset_close((AAsset*)cookie);
return 0;
}
// must be established by someone else...
static AAssetManager* android_asset_manager = NULL;
void android_fopen_set_asset_manager(AAssetManager* manager) {
android_asset_manager = manager;
}
FILE* android_fopen(const char* fname, const char* mode) {
if (mode[0] == 'w') return NULL;
AAsset* asset = AAssetManager_open(android_asset_manager, fname, 0);
if (!asset) return NULL;
return funopen(asset, android_read, android_write, android_seek,
android_close);
}

android_fopen.h (deleted)

@@ -1,39 +0,0 @@
/*
* Copyright (C) 2016 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef ANDROID_FOPEN_H
#define ANDROID_FOPEN_H
#define __USE_BSD
#include <android/asset_manager.h>
#include <stdio.h>
#ifdef __cplusplus
extern "C" {
#endif
/* hijack fopen and route it through the android asset system so that
we can pull things out of our package's APK */
void android_fopen_set_asset_manager(AAssetManager* manager);
FILE* android_fopen(const char* fname, const char* mode);
#define fopen(name, mode) android_fopen(name, mode)
#ifdef __cplusplus
}
#endif
#endif

native-media-jni.c (deleted)

@@ -1,560 +0,0 @@
/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* This is a JNI example where we use native methods to play video
* using OpenMAX AL. See the corresponding Java source file located at:
*
* src/com/example/nativemedia/NativeMedia/NativeMedia.java
*
* In this example we use assert() for "impossible" error conditions,
* and explicit handling and recovery for more likely error conditions.
*/
#include <assert.h>
#include <jni.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
// for __android_log_print(ANDROID_LOG_INFO, "YourApp", "formatted message");
#include <android/log.h>
#define TAG "NativeMedia"
#define LOGV(...) __android_log_print(ANDROID_LOG_VERBOSE, TAG, __VA_ARGS__)
// for native media
#include <OMXAL/OpenMAXAL.h>
#include <OMXAL/OpenMAXAL_Android.h>
// for native window JNI
#include <android/asset_manager_jni.h>
#include <android/native_window_jni.h>
#include "android_fopen.h"
// engine interfaces
static XAObjectItf engineObject = NULL;
static XAEngineItf engineEngine = NULL;
// output mix interfaces
static XAObjectItf outputMixObject = NULL;
// streaming media player interfaces
static XAObjectItf playerObj = NULL;
static XAPlayItf playerPlayItf = NULL;
static XAAndroidBufferQueueItf playerBQItf = NULL;
static XAStreamInformationItf playerStreamInfoItf = NULL;
static XAVolumeItf playerVolItf = NULL;
// number of required interfaces for the MediaPlayer creation
#define NB_MAXAL_INTERFACES \
3 // XAAndroidBufferQueueItf, XAStreamInformationItf and XAPlayItf
// video sink for the player
static ANativeWindow* theNativeWindow;
// number of buffers in our buffer queue, an arbitrary number
#define NB_BUFFERS 8
// we're streaming MPEG-2 transport stream data, operate on transport stream
// block size
#define MPEG2_TS_PACKET_SIZE 188
// number of MPEG-2 transport stream blocks per buffer, an arbitrary number
#define PACKETS_PER_BUFFER 10
// determines how much memory we're dedicating to memory caching
#define BUFFER_SIZE (PACKETS_PER_BUFFER * MPEG2_TS_PACKET_SIZE)
// where we cache in memory the data to play
// note this memory is re-used by the buffer queue callback
static char dataCache[BUFFER_SIZE * NB_BUFFERS];
// handle of the file to play
static FILE* file;
static jobject android_java_asset_manager = NULL;
// has the app reached the end of the file
static jboolean reachedEof = JNI_FALSE;
// constant to identify a buffer context which is the end of the stream to
// decode
static const int kEosBufferCntxt =
1980; // a magic value we can compare against
// For mutual exclusion between callback thread and application thread(s).
// The mutex protects reachedEof, discontinuity,
// The condition is signalled when a discontinuity is acknowledged.
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
// whether a discontinuity is in progress
static jboolean discontinuity = JNI_FALSE;
static jboolean enqueueInitialBuffers(jboolean discontinuity);
// AndroidBufferQueueItf callback to supply MPEG-2 TS packets to the media
// player
static XAresult AndroidBufferQueueCallback(
XAAndroidBufferQueueItf caller, void* pCallbackContext, /* input */
void* pBufferContext, /* input */
void* pBufferData, /* input */
XAuint32 dataSize, /* input */
XAuint32 dataUsed, /* input */
const XAAndroidBufferItem* pItems, /* input */
XAuint32 itemsLength /* input */) {
XAresult res;
int ok;
// pCallbackContext was specified as NULL at RegisterCallback and is unused
// here
assert(NULL == pCallbackContext);
// note there is never any contention on this mutex unless a discontinuity
// request is active
ok = pthread_mutex_lock(&mutex);
assert(0 == ok);
// was a discontinuity requested?
if (discontinuity) {
// Note: can't rewind after EOS, which we send when reaching EOF
// (don't send EOS if you plan to play more content through the same player)
if (!reachedEof) {
// clear the buffer queue
res = (*playerBQItf)->Clear(playerBQItf);
assert(XA_RESULT_SUCCESS == res);
// rewind the data source so we are guaranteed to be at an appropriate
// point
rewind(file);
// Enqueue the initial buffers, with a discontinuity indicator on first
// buffer
(void)enqueueInitialBuffers(JNI_TRUE);
}
// acknowledge the discontinuity request
discontinuity = JNI_FALSE;
ok = pthread_cond_signal(&cond);
assert(0 == ok);
goto exit;
}
if ((pBufferData == NULL) && (pBufferContext != NULL)) {
const int processedCommand = *(int*)pBufferContext;
if (kEosBufferCntxt == processedCommand) {
LOGV("EOS was processed\n");
// our buffer with the EOS message has been consumed
assert(0 == dataSize);
goto exit;
}
}
// pBufferData is a pointer to a buffer that we previously Enqueued
assert((dataSize > 0) && ((dataSize % MPEG2_TS_PACKET_SIZE) == 0));
assert(dataCache <= (char*)pBufferData &&
(char*)pBufferData < &dataCache[BUFFER_SIZE * NB_BUFFERS]);
assert(0 == (((char*)pBufferData - dataCache) % BUFFER_SIZE));
// don't bother trying to read more data once we've hit EOF
if (reachedEof) {
goto exit;
}
// note we do call fread from multiple threads, but never concurrently
size_t bytesRead;
bytesRead = fread(pBufferData, 1, BUFFER_SIZE, file);
if (bytesRead > 0) {
if ((bytesRead % MPEG2_TS_PACKET_SIZE) != 0) {
LOGV("Dropping last packet because it is not whole");
}
size_t packetsRead = bytesRead / MPEG2_TS_PACKET_SIZE;
size_t bufferSize = packetsRead * MPEG2_TS_PACKET_SIZE;
res = (*caller)->Enqueue(caller, NULL /*pBufferContext*/,
pBufferData /*pData*/, bufferSize /*dataLength*/,
NULL /*pMsg*/, 0 /*msgLength*/);
assert(XA_RESULT_SUCCESS == res);
} else {
// EOF or I/O error, signal EOS
XAAndroidBufferItem msgEos[1];
msgEos[0].itemKey = XA_ANDROID_ITEMKEY_EOS;
msgEos[0].itemSize = 0;
// EOS message has no parameters, so the total size of the message is the
// size of the key
// plus the size of itemSize, both XAuint32
res = (*caller)->Enqueue(caller, (void*)&kEosBufferCntxt /*pBufferContext*/,
NULL /*pData*/, 0 /*dataLength*/, msgEos /*pMsg*/,
sizeof(XAuint32) * 2 /*msgLength*/);
assert(XA_RESULT_SUCCESS == res);
reachedEof = JNI_TRUE;
}
exit:
ok = pthread_mutex_unlock(&mutex);
assert(0 == ok);
return XA_RESULT_SUCCESS;
}
// callback invoked whenever there is new or changed stream information
static void StreamChangeCallback(XAStreamInformationItf caller,
XAuint32 eventId, XAuint32 streamIndex,
void* pEventData, void* pContext) {
LOGV("StreamChangeCallback called for stream %u", streamIndex);
// pContext was specified as NULL at RegisterStreamChangeCallback and is
// unused here
assert(NULL == pContext);
switch (eventId) {
case XA_STREAMCBEVENT_PROPERTYCHANGE: {
/** From spec 1.0.1:
"This event indicates that stream property change has occurred.
The streamIndex parameter identifies the stream with the property
change. The pEventData parameter for this event is not used and shall
be ignored."
*/
XAresult res;
XAuint32 domain;
res = (*caller)->QueryStreamType(caller, streamIndex, &domain);
assert(XA_RESULT_SUCCESS == res);
switch (domain) {
case XA_DOMAINTYPE_VIDEO: {
XAVideoStreamInformation videoInfo;
res = (*caller)->QueryStreamInformation(caller, streamIndex,
&videoInfo);
assert(XA_RESULT_SUCCESS == res);
LOGV(
"Found video size %u x %u, codec ID=%u, frameRate=%u, "
"bitRate=%u, duration=%u ms",
videoInfo.width, videoInfo.height, videoInfo.codecId,
videoInfo.frameRate, videoInfo.bitRate, videoInfo.duration);
} break;
default:
fprintf(stderr, "Unexpected domain %u\n", domain);
break;
}
} break;
default:
fprintf(stderr, "Unexpected stream event ID %u\n", eventId);
break;
}
}
// create the engine and output mix objects
void Java_com_example_nativemedia_NativeMedia_createEngine(JNIEnv* env,
jclass clazz) {
XAresult res;
// create engine
res = xaCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
assert(XA_RESULT_SUCCESS == res);
// realize the engine
res = (*engineObject)->Realize(engineObject, XA_BOOLEAN_FALSE);
assert(XA_RESULT_SUCCESS == res);
// get the engine interface, which is needed in order to create other objects
res =
(*engineObject)->GetInterface(engineObject, XA_IID_ENGINE, &engineEngine);
assert(XA_RESULT_SUCCESS == res);
// create output mix
res = (*engineEngine)
->CreateOutputMix(engineEngine, &outputMixObject, 0, NULL, NULL);
assert(XA_RESULT_SUCCESS == res);
// realize the output mix
res = (*outputMixObject)->Realize(outputMixObject, XA_BOOLEAN_FALSE);
assert(XA_RESULT_SUCCESS == res);
}
// Enqueue the initial buffers, and optionally signal a discontinuity in the
// first buffer
static jboolean enqueueInitialBuffers(jboolean discontinuity) {
/* Fill our cache.
* We want to read whole packets (integral multiples of MPEG2_TS_PACKET_SIZE).
* fread returns units of "elements" not bytes, so we ask for 1-byte elements
* and then check that the number of elements is a multiple of the packet
* size.
*/
size_t bytesRead;
bytesRead = fread(dataCache, 1, BUFFER_SIZE * NB_BUFFERS, file);
if (bytesRead <= 0) {
// could be premature EOF or I/O error
return JNI_FALSE;
}
if ((bytesRead % MPEG2_TS_PACKET_SIZE) != 0) {
LOGV("Dropping last packet because it is not whole");
}
size_t packetsRead = bytesRead / MPEG2_TS_PACKET_SIZE;
LOGV("Initially queueing %zu packets", packetsRead);
/* Enqueue the content of our cache before starting to play,
we don't want to starve the player */
size_t i;
for (i = 0; i < NB_BUFFERS && packetsRead > 0; i++) {
// compute size of this buffer
size_t packetsThisBuffer = packetsRead;
if (packetsThisBuffer > PACKETS_PER_BUFFER) {
packetsThisBuffer = PACKETS_PER_BUFFER;
}
size_t bufferSize = packetsThisBuffer * MPEG2_TS_PACKET_SIZE;
XAresult res;
if (discontinuity) {
// signal discontinuity
XAAndroidBufferItem items[1];
items[0].itemKey = XA_ANDROID_ITEMKEY_DISCONTINUITY;
items[0].itemSize = 0;
// DISCONTINUITY message has no parameters,
// so the total size of the message is the size of the key
// plus the size of itemSize, both XAuint32
res = (*playerBQItf)
->Enqueue(playerBQItf, NULL /*pBufferContext*/,
dataCache + i * BUFFER_SIZE, bufferSize,
items /*pMsg*/, sizeof(XAuint32) * 2 /*msgLength*/);
discontinuity = JNI_FALSE;
} else {
res = (*playerBQItf)
->Enqueue(playerBQItf, NULL /*pBufferContext*/,
dataCache + i * BUFFER_SIZE, bufferSize, NULL, 0);
}
assert(XA_RESULT_SUCCESS == res);
packetsRead -= packetsThisBuffer;
}
return JNI_TRUE;
}
// create streaming media player
jboolean Java_com_example_nativemedia_NativeMedia_createStreamingMediaPlayer(
JNIEnv* env, jclass clazz, jobject assetMgr, jstring filename) {
XAresult res;
android_java_asset_manager = (*env)->NewGlobalRef(env, assetMgr);
android_fopen_set_asset_manager(
AAssetManager_fromJava(env, android_java_asset_manager));
// convert Java string to UTF-8
const char* utf8 = (*env)->GetStringUTFChars(env, filename, NULL);
assert(NULL != utf8);
// open the file to play
file = android_fopen(utf8, "rb");
if (file == NULL) {
return JNI_FALSE;
}
// configure data source
XADataLocator_AndroidBufferQueue loc_abq = {XA_DATALOCATOR_ANDROIDBUFFERQUEUE,
NB_BUFFERS};
XADataFormat_MIME format_mime = {XA_DATAFORMAT_MIME, XA_ANDROID_MIME_MP2TS,
XA_CONTAINERTYPE_MPEG_TS};
XADataSource dataSrc = {&loc_abq, &format_mime};
// configure audio sink
XADataLocator_OutputMix loc_outmix = {XA_DATALOCATOR_OUTPUTMIX,
outputMixObject};
XADataSink audioSnk = {&loc_outmix, NULL};
// configure image video sink
XADataLocator_NativeDisplay loc_nd = {
XA_DATALOCATOR_NATIVEDISPLAY, // locatorType
// the video sink must be an ANativeWindow created from a Surface or
// SurfaceTexture
(void*)theNativeWindow, // hWindow
// must be NULL
NULL // hDisplay
};
XADataSink imageVideoSink = {&loc_nd, NULL};
// declare interfaces to use
XAboolean required[NB_MAXAL_INTERFACES] = {XA_BOOLEAN_TRUE, XA_BOOLEAN_TRUE,
XA_BOOLEAN_TRUE};
XAInterfaceID iidArray[NB_MAXAL_INTERFACES] = {
XA_IID_PLAY, XA_IID_ANDROIDBUFFERQUEUESOURCE, XA_IID_STREAMINFORMATION};
// create media player
res =
(*engineEngine)
->CreateMediaPlayer(engineEngine, &playerObj, &dataSrc, NULL,
&audioSnk, &imageVideoSink, NULL, NULL,
NB_MAXAL_INTERFACES /*XAuint32 numInterfaces*/,
iidArray /*const XAInterfaceID *pInterfaceIds*/,
required /*const XAboolean *pInterfaceRequired*/);
assert(XA_RESULT_SUCCESS == res);
// release the Java string and UTF-8
(*env)->ReleaseStringUTFChars(env, filename, utf8);
// realize the player
res = (*playerObj)->Realize(playerObj, XA_BOOLEAN_FALSE);
assert(XA_RESULT_SUCCESS == res);
// get the play interface
res = (*playerObj)->GetInterface(playerObj, XA_IID_PLAY, &playerPlayItf);
assert(XA_RESULT_SUCCESS == res);
// get the stream information interface (for video size)
res = (*playerObj)
->GetInterface(playerObj, XA_IID_STREAMINFORMATION,
&playerStreamInfoItf);
assert(XA_RESULT_SUCCESS == res);
// get the volume interface
res = (*playerObj)->GetInterface(playerObj, XA_IID_VOLUME, &playerVolItf);
assert(XA_RESULT_SUCCESS == res);
// get the Android buffer queue interface
res = (*playerObj)
->GetInterface(playerObj, XA_IID_ANDROIDBUFFERQUEUESOURCE,
&playerBQItf);
assert(XA_RESULT_SUCCESS == res);
// specify which events we want to be notified of
res = (*playerBQItf)
->SetCallbackEventsMask(playerBQItf,
XA_ANDROIDBUFFERQUEUEEVENT_PROCESSED);
assert(XA_RESULT_SUCCESS == res);
// register the callback from which OpenMAX AL can retrieve the data to play
res = (*playerBQItf)
->RegisterCallback(playerBQItf, AndroidBufferQueueCallback, NULL);
assert(XA_RESULT_SUCCESS == res);
// we want to be notified of the video size once it's found, so we register a
// callback for that
res = (*playerStreamInfoItf)
->RegisterStreamChangeCallback(playerStreamInfoItf,
StreamChangeCallback, NULL);
assert(XA_RESULT_SUCCESS == res);
// enqueue the initial buffers
if (!enqueueInitialBuffers(JNI_FALSE)) {
return JNI_FALSE;
}
// prepare the player
res = (*playerPlayItf)->SetPlayState(playerPlayItf, XA_PLAYSTATE_PAUSED);
assert(XA_RESULT_SUCCESS == res);
// set the volume
res = (*playerVolItf)->SetVolumeLevel(playerVolItf, 0);
assert(XA_RESULT_SUCCESS == res);
// start the playback
res = (*playerPlayItf)->SetPlayState(playerPlayItf, XA_PLAYSTATE_PLAYING);
assert(XA_RESULT_SUCCESS == res);
return JNI_TRUE;
}
// set the playing state for the streaming media player
void Java_com_example_nativemedia_NativeMedia_setPlayingStreamingMediaPlayer(
JNIEnv* env, jclass clazz, jboolean isPlaying) {
XAresult res;
// make sure the streaming media player was created
if (NULL != playerPlayItf) {
// set the player's state
res = (*playerPlayItf)
->SetPlayState(playerPlayItf, isPlaying ? XA_PLAYSTATE_PLAYING
: XA_PLAYSTATE_PAUSED);
assert(XA_RESULT_SUCCESS == res);
}
}
// shut down the native media system
void Java_com_example_nativemedia_NativeMedia_shutdown(JNIEnv* env,
jclass clazz) {
// destroy streaming media player object, and invalidate all associated
// interfaces
if (playerObj != NULL) {
(*playerObj)->Destroy(playerObj);
playerObj = NULL;
playerPlayItf = NULL;
playerBQItf = NULL;
playerStreamInfoItf = NULL;
playerVolItf = NULL;
}
// destroy output mix object, and invalidate all associated interfaces
if (outputMixObject != NULL) {
(*outputMixObject)->Destroy(outputMixObject);
outputMixObject = NULL;
}
// destroy engine object, and invalidate all associated interfaces
if (engineObject != NULL) {
(*engineObject)->Destroy(engineObject);
engineObject = NULL;
engineEngine = NULL;
}
// close the file
if (file != NULL) {
fclose(file);
file = NULL;
}
if (android_java_asset_manager) {
(*env)->DeleteGlobalRef(env, android_java_asset_manager);
android_java_asset_manager = NULL;
}
// make sure we don't leak native windows
if (theNativeWindow != NULL) {
ANativeWindow_release(theNativeWindow);
theNativeWindow = NULL;
}
}
// set the surface
void Java_com_example_nativemedia_NativeMedia_setSurface(JNIEnv* env,
jclass clazz,
jobject surface) {
// obtain a native window from a Java surface
theNativeWindow = ANativeWindow_fromSurface(env, surface);
}
// rewind the streaming media player
void Java_com_example_nativemedia_NativeMedia_rewindStreamingMediaPlayer(
JNIEnv* env, jclass clazz) {
XAresult res;
XAuint32 state;
if (!playerPlayItf) {
return;
}
res = (*playerPlayItf)->GetPlayState(playerPlayItf, &state);
assert(XA_RESULT_SUCCESS == res);
if (state == XA_PLAYSTATE_PAUSED || state == XA_PLAYSTATE_STOPPED) {
discontinuity = JNI_TRUE;
return;
}
// make sure the streaming media player was created
if (NULL != playerBQItf && NULL != file) {
// first wait for buffers currently in queue to be drained
int ok;
ok = pthread_mutex_lock(&mutex);
assert(0 == ok);
discontinuity = JNI_TRUE;
// wait for discontinuity request to be observed by buffer queue callback
// Note: can't rewind after EOS, which we send when reaching EOF
// (don't send EOS if you plan to play more content through the same player)
while (discontinuity && !reachedEof) {
ok = pthread_cond_wait(&cond, &mutex);
assert(0 == ok);
}
ok = pthread_mutex_unlock(&mutex);
assert(0 == ok);
}
}

MyGLSurfaceView.java (deleted)

@@ -1,336 +0,0 @@
/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.nativemedia;
import android.graphics.SurfaceTexture;
import android.util.Log;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;
import android.util.AttributeSet;
public class MyGLSurfaceView extends GLSurfaceView {
MyRenderer mRenderer;
public MyGLSurfaceView(Context context) {
this(context, null);
}
public MyGLSurfaceView(Context context, AttributeSet attributeSet) {
super(context, attributeSet);
init();
}
private void init() {
setEGLContextClientVersion(2);
mRenderer = new MyRenderer();
setRenderer(mRenderer);
}
@Override
public void onPause() {
super.onPause();
}
@Override
public void onResume() {
super.onResume();
}
public SurfaceTexture getSurfaceTexture() {
return mRenderer.getSurfaceTexture();
}
}
class MyRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
public MyRenderer() {
mVertices = ByteBuffer.allocateDirect(mVerticesData.length
* FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
mVertices.put(mVerticesData).position(0);
Matrix.setIdentityM(mSTMatrix, 0);
Matrix.setIdentityM(mMMatrix, 0);
Matrix.rotateM(mMMatrix, 0, 20, 0, 1, 0);
}
public void onDrawFrame(GL10 glUnused) {
synchronized(this) {
if (updateSurface) {
mSurface.updateTexImage();
mSurface.getTransformMatrix(mSTMatrix);
updateSurface = false;
}
}
// Ignore the passed-in GL10 interface, and use the GLES20
// class's static methods instead.
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, mTextureID);
mVertices.position(VERTICES_DATA_POS_OFFSET);
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false,
VERTICES_DATA_STRIDE_BYTES, mVertices);
checkGlError("glVertexAttribPointer maPosition");
GLES20.glEnableVertexAttribArray(maPositionHandle);
checkGlError("glEnableVertexAttribArray maPositionHandle");
mVertices.position(VERTICES_DATA_UV_OFFSET);
// Texture coordinates are two components (U, V) per vertex; see mVerticesData.
GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT, false,
VERTICES_DATA_STRIDE_BYTES, mVertices);
checkGlError("glVertexAttribPointer maTextureHandle");
GLES20.glEnableVertexAttribArray(maTextureHandle);
checkGlError("glEnableVertexAttribArray maTextureHandle");
Matrix.multiplyMM(mMVPMatrix, 0, mVMatrix, 0, mMMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(muMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(muSTMatrixHandle, 1, false, mSTMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
checkGlError("glDrawArrays");
}
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
// Ignore the passed-in GL10 interface, and use the GLES20
// class's static methods instead.
GLES20.glViewport(0, 0, width, height);
mRatio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -mRatio, mRatio, -1, 1, 3, 7);
}
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
// Ignore the passed-in GL10 interface, and use the GLES20
// class's static methods instead.
/* Set up alpha blending and an Android background color */
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glClearColor(0.643f, 0.776f, 0.223f, 1.0f);
/* Set up shaders and handles to their variables */
mProgram = createProgram(mVertexShader, mFragmentShader);
if (mProgram == 0) {
return;
}
maPositionHandle = GLES20.glGetAttribLocation(mProgram, "aPosition");
checkGlError("glGetAttribLocation aPosition");
if (maPositionHandle == -1) {
throw new RuntimeException("Could not get attrib location for aPosition");
}
maTextureHandle = GLES20.glGetAttribLocation(mProgram, "aTextureCoord");
checkGlError("glGetAttribLocation aTextureCoord");
if (maTextureHandle == -1) {
throw new RuntimeException("Could not get attrib location for aTextureCoord");
}
muMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
checkGlError("glGetUniformLocation uMVPMatrix");
if (muMVPMatrixHandle == -1) {
throw new RuntimeException("Could not get uniform location for uMVPMatrix");
}
muSTMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uSTMatrix");
checkGlError("glGetUniformLocation uSTMatrix");
if (muSTMatrixHandle == -1) {
throw new RuntimeException("Could not get uniform location for uSTMatrix");
}
/*
* Create our texture. This has to be done each time the
* surface is created.
*/
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
mTextureID = textures[0];
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, mTextureID);
checkGlError("glBindTexture mTextureID");
// Can't do mipmapping with camera source
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_NEAREST);
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
// Clamp to edge is the only option
GLES20.glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T,
GLES20.GL_CLAMP_TO_EDGE);
checkGlError("glTexParameteri mTextureID");
/*
* Create the SurfaceTexture that will feed this textureID, and pass it to the camera
*/
mSurface = new SurfaceTexture(mTextureID);
mSurface.setOnFrameAvailableListener(this);
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 4f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
synchronized(this) {
updateSurface = false;
}
}
synchronized public void onFrameAvailable(SurfaceTexture surface) {
/* For simplicity, SurfaceTexture calls here when it has new
* data available. The call may arrive on an arbitrary thread, so be safe
* and synchronize. No OpenGL calls may be made here.
*/
updateSurface = true;
//Log.v(TAG, "onFrameAvailable " + surface.getTimestamp());
}
private int loadShader(int shaderType, String source) {
int shader = GLES20.glCreateShader(shaderType);
if (shader != 0) {
GLES20.glShaderSource(shader, source);
GLES20.glCompileShader(shader);
int[] compiled = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
Log.e(TAG, "Could not compile shader " + shaderType + ":");
Log.e(TAG, GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
shader = 0;
}
}
return shader;
}
private int createProgram(String vertexSource, String fragmentSource) {
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexSource);
if (vertexShader == 0) {
return 0;
}
int pixelShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentSource);
if (pixelShader == 0) {
return 0;
}
int program = GLES20.glCreateProgram();
if (program != 0) {
GLES20.glAttachShader(program, vertexShader);
checkGlError("glAttachShader");
GLES20.glAttachShader(program, pixelShader);
checkGlError("glAttachShader");
GLES20.glLinkProgram(program);
int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
Log.e(TAG, "Could not link program: ");
Log.e(TAG, GLES20.glGetProgramInfoLog(program));
GLES20.glDeleteProgram(program);
program = 0;
}
}
return program;
}
private void checkGlError(String op) {
int error;
while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
Log.e(TAG, op + ": glError " + error);
throw new RuntimeException(op + ": glError " + error);
}
}
private static final int FLOAT_SIZE_BYTES = 4;
private static final int VERTICES_DATA_STRIDE_BYTES = 5 * FLOAT_SIZE_BYTES;
private static final int VERTICES_DATA_POS_OFFSET = 0;
private static final int VERTICES_DATA_UV_OFFSET = 3;
private final float[] mVerticesData = {
// X, Y, Z, U, V
-1.0f, -1.0f, 0, 0.f, 0.f,
1.0f, -1.0f, 0, 1.f, 0.f,
-1.0f, 1.0f, 0, 0.f, 1.f,
1.0f, 1.0f, 0, 1.f, 1.f,
};
private FloatBuffer mVertices;
private final String mVertexShader =
"uniform mat4 uMVPMatrix;\n" +
"uniform mat4 uSTMatrix;\n" +
"attribute vec4 aPosition;\n" +
"attribute vec4 aTextureCoord;\n" +
"varying vec2 vTextureCoord;\n" +
"void main() {\n" +
" gl_Position = uMVPMatrix * aPosition;\n" +
" vTextureCoord = (uSTMatrix * aTextureCoord).xy;\n" +
"}\n";
private final String mFragmentShader =
"#extension GL_OES_EGL_image_external : require\n" +
"precision mediump float;\n" +
"varying vec2 vTextureCoord;\n" +
"uniform samplerExternalOES sTexture;\n" +
"void main() {\n" +
" gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
"}\n";
private float[] mMVPMatrix = new float[16];
private float[] mProjMatrix = new float[16];
private float[] mMMatrix = new float[16];
private float[] mVMatrix = new float[16];
private float[] mSTMatrix = new float[16];
private int mProgram;
private int mTextureID;
private int muMVPMatrixHandle;
private int muSTMatrixHandle;
private int maPositionHandle;
private int maTextureHandle;
private float mRatio = 1.0f;
private SurfaceTexture mSurface;
private boolean updateSurface = false;
private static final String TAG = "MyRenderer";
// GL_TEXTURE_EXTERNAL_OES, from the GL_OES_EGL_image_external extension;
// the GLES20 class does not expose this constant, so it is defined here.
private static final int GL_TEXTURE_EXTERNAL_OES = 0x8D65;
public SurfaceTexture getSurfaceTexture() {
return mSurface;
}
}

View File

@@ -1,421 +0,0 @@
/*
* Copyright (C) 2010 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.nativemedia;
import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import android.content.res.AssetManager;
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.util.Log;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.widget.AdapterView;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.Spinner;
import java.io.FileDescriptor;
import java.io.IOException;
public class NativeMedia extends Activity {
static final String TAG = "NativeMedia";
String mSourceString = null;
String mSinkString = null;
// member variables for Java media player
MediaPlayer mMediaPlayer;
boolean mMediaPlayerIsPrepared = false;
SurfaceView mSurfaceView1;
SurfaceHolder mSurfaceHolder1;
// member variables for native media player
boolean mIsPlayingStreaming = false;
SurfaceView mSurfaceView2;
SurfaceHolder mSurfaceHolder2;
VideoSink mSelectedVideoSink;
VideoSink mJavaMediaPlayerVideoSink;
VideoSink mNativeMediaPlayerVideoSink;
SurfaceHolderVideoSink mSurfaceHolder1VideoSink, mSurfaceHolder2VideoSink;
GLViewVideoSink mGLView1VideoSink, mGLView2VideoSink;
AssetManager assetMgr;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle icicle) {
super.onCreate(icicle);
setContentView(R.layout.main);
// Keep a reference to the application's AssetManager so it is not garbage
// collected while native code is still using it.
assetMgr = getApplication().getAssets();
mGLView1 = (MyGLSurfaceView) findViewById(R.id.glsurfaceview1);
mGLView2 = (MyGLSurfaceView) findViewById(R.id.glsurfaceview2);
// initialize native media system
createEngine();
// set up the Surface 1 video sink
mSurfaceView1 = (SurfaceView) findViewById(R.id.surfaceview1);
mSurfaceHolder1 = mSurfaceView1.getHolder();
mSurfaceHolder1.addCallback(new SurfaceHolder.Callback() {
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
Log.v(TAG, "surfaceChanged format=" + format + ", width=" + width + ", height="
+ height);
}
public void surfaceCreated(SurfaceHolder holder) {
Log.v(TAG, "surfaceCreated");
setSurface(holder.getSurface());
}
public void surfaceDestroyed(SurfaceHolder holder) {
Log.v(TAG, "surfaceDestroyed");
}
});
// set up the Surface 2 video sink
mSurfaceView2 = (SurfaceView) findViewById(R.id.surfaceview2);
mSurfaceHolder2 = mSurfaceView2.getHolder();
mSurfaceHolder2.addCallback(new SurfaceHolder.Callback() {
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
Log.v(TAG, "surfaceChanged format=" + format + ", width=" + width + ", height="
+ height);
}
public void surfaceCreated(SurfaceHolder holder) {
Log.v(TAG, "surfaceCreated");
setSurface(holder.getSurface());
}
public void surfaceDestroyed(SurfaceHolder holder) {
Log.v(TAG, "surfaceDestroyed");
}
});
// create Java media player
mMediaPlayer = new MediaPlayer();
// set up Java media player listeners
mMediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
public void onPrepared(MediaPlayer mediaPlayer) {
int width = mediaPlayer.getVideoWidth();
int height = mediaPlayer.getVideoHeight();
Log.v(TAG, "onPrepared width=" + width + ", height=" + height);
if (width != 0 && height != 0 && mJavaMediaPlayerVideoSink != null) {
mJavaMediaPlayerVideoSink.setFixedSize(width, height);
}
mMediaPlayerIsPrepared = true;
mediaPlayer.start();
}
});
mMediaPlayer.setOnVideoSizeChangedListener(new MediaPlayer.OnVideoSizeChangedListener() {
public void onVideoSizeChanged(MediaPlayer mediaPlayer, int width, int height) {
Log.v(TAG, "onVideoSizeChanged width=" + width + ", height=" + height);
if (width != 0 && height != 0 && mJavaMediaPlayerVideoSink != null) {
mJavaMediaPlayerVideoSink.setFixedSize(width, height);
}
}
});
// initialize content source spinner
Spinner sourceSpinner = (Spinner) findViewById(R.id.source_spinner);
ArrayAdapter<CharSequence> sourceAdapter = ArrayAdapter.createFromResource(
this, R.array.source_array, android.R.layout.simple_spinner_item);
sourceAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
sourceSpinner.setAdapter(sourceAdapter);
sourceSpinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
mSourceString = parent.getItemAtPosition(pos).toString();
Log.v(TAG, "onItemSelected " + mSourceString);
}
public void onNothingSelected(AdapterView<?> parent) {
Log.v(TAG, "onNothingSelected");
mSourceString = null;
}
});
// initialize video sink spinner
Spinner sinkSpinner = (Spinner) findViewById(R.id.sink_spinner);
ArrayAdapter<CharSequence> sinkAdapter = ArrayAdapter.createFromResource(
this, R.array.sink_array, android.R.layout.simple_spinner_item);
sinkAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
sinkSpinner.setAdapter(sinkAdapter);
sinkSpinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
mSinkString = parent.getItemAtPosition(pos).toString();
Log.v(TAG, "onItemSelected " + mSinkString);
if ("Surface 1".equals(mSinkString)) {
if (mSurfaceHolder1VideoSink == null) {
mSurfaceHolder1VideoSink = new SurfaceHolderVideoSink(mSurfaceHolder1);
}
mSelectedVideoSink = mSurfaceHolder1VideoSink;
} else if ("Surface 2".equals(mSinkString)) {
if (mSurfaceHolder2VideoSink == null) {
mSurfaceHolder2VideoSink = new SurfaceHolderVideoSink(mSurfaceHolder2);
}
mSelectedVideoSink = mSurfaceHolder2VideoSink;
} else if ("SurfaceTexture 1".equals(mSinkString)) {
if (mGLView1VideoSink == null) {
mGLView1VideoSink = new GLViewVideoSink(mGLView1);
}
mSelectedVideoSink = mGLView1VideoSink;
} else if ("SurfaceTexture 2".equals(mSinkString)) {
if (mGLView2VideoSink == null) {
mGLView2VideoSink = new GLViewVideoSink(mGLView2);
}
mSelectedVideoSink = mGLView2VideoSink;
}
}
public void onNothingSelected(AdapterView<?> parent) {
Log.v(TAG, "onNothingSelected");
mSinkString = null;
mSelectedVideoSink = null;
}
});
// initialize button click handlers
// Java MediaPlayer start/pause
((Button) findViewById(R.id.start_java)).setOnClickListener(new View.OnClickListener() {
public void onClick(View view) {
if (mJavaMediaPlayerVideoSink == null) {
if (mSelectedVideoSink == null) {
return;
}
mSelectedVideoSink.useAsSinkForJava(mMediaPlayer);
mJavaMediaPlayerVideoSink = mSelectedVideoSink;
}
if (!mMediaPlayerIsPrepared) {
if (mSourceString != null) {
try {
AssetFileDescriptor clipFd = assetMgr.openFd(mSourceString);
mMediaPlayer.setDataSource(clipFd.getFileDescriptor(),
clipFd.getStartOffset(),
clipFd.getLength());
clipFd.close();
} catch (IOException e) {
Log.e(TAG, "IOException " + e);
}
mMediaPlayer.prepareAsync();
}
} else if (mMediaPlayer.isPlaying()) {
mMediaPlayer.pause();
} else {
mMediaPlayer.start();
}
}
});
// native MediaPlayer start/pause
((Button) findViewById(R.id.start_native)).setOnClickListener(new View.OnClickListener() {
boolean created = false;
public void onClick(View view) {
if (!created) {
if (mNativeMediaPlayerVideoSink == null) {
if (mSelectedVideoSink == null) {
return;
}
mSelectedVideoSink.useAsSinkForNative();
mNativeMediaPlayerVideoSink = mSelectedVideoSink;
}
if (mSourceString != null) {
created = createStreamingMediaPlayer(assetMgr, mSourceString);
}
}
if (created) {
mIsPlayingStreaming = !mIsPlayingStreaming;
setPlayingStreamingMediaPlayer(mIsPlayingStreaming);
}
}
});
// finish
((Button) findViewById(R.id.finish)).setOnClickListener(new View.OnClickListener() {
public void onClick(View view) {
finish();
}
});
// Java MediaPlayer rewind
((Button) findViewById(R.id.rewind_java)).setOnClickListener(new View.OnClickListener() {
public void onClick(View view) {
if (mMediaPlayerIsPrepared) {
mMediaPlayer.seekTo(0);
}
}
});
// native MediaPlayer rewind
((Button) findViewById(R.id.rewind_native)).setOnClickListener(new View.OnClickListener() {
public void onClick(View view) {
if (mNativeMediaPlayerVideoSink != null) {
rewindStreamingMediaPlayer();
}
}
});
}
/** Called when the activity is about to be paused. */
@Override
protected void onPause() {
mIsPlayingStreaming = false;
setPlayingStreamingMediaPlayer(false);
mGLView1.onPause();
mGLView2.onPause();
super.onPause();
}
@Override
protected void onResume() {
super.onResume();
mGLView1.onResume();
mGLView2.onResume();
}
/** Called when the activity is about to be destroyed. */
@Override
protected void onDestroy() {
shutdown();
super.onDestroy();
}
private MyGLSurfaceView mGLView1, mGLView2;
/** Native methods, implemented in jni folder */
public static native void createEngine();
public static native boolean createStreamingMediaPlayer(AssetManager assetManager,
String filename);
public static native void setPlayingStreamingMediaPlayer(boolean isPlaying);
public static native void shutdown();
public static native void setSurface(Surface surface);
public static native void rewindStreamingMediaPlayer();
/** Load jni .so on initialization */
static {
System.loadLibrary("native-media-jni");
}
// VideoSink abstracts over the two kinds of video sink: a Surface obtained
// from a SurfaceHolder, and a Surface derived from a GLSurfaceView's
// SurfaceTexture.
static abstract class VideoSink {
abstract void setFixedSize(int width, int height);
abstract void useAsSinkForJava(MediaPlayer mediaPlayer);
abstract void useAsSinkForNative();
}
static class SurfaceHolderVideoSink extends VideoSink {
private final SurfaceHolder mSurfaceHolder;
SurfaceHolderVideoSink(SurfaceHolder surfaceHolder) {
mSurfaceHolder = surfaceHolder;
}
void setFixedSize(int width, int height) {
mSurfaceHolder.setFixedSize(width, height);
}
void useAsSinkForJava(MediaPlayer mediaPlayer) {
// Use MediaPlayer.setSurface(Surface), available since API level 14,
// rather than MediaPlayer.setDisplay(SurfaceHolder) from API level 1,
// because setSurface also works with a Surface derived from a SurfaceTexture.
Surface s = mSurfaceHolder.getSurface();
mediaPlayer.setSurface(s);
s.release();
}
void useAsSinkForNative() {
Surface s = mSurfaceHolder.getSurface();
setSurface(s);
s.release();
}
}
static class GLViewVideoSink extends VideoSink {
private final MyGLSurfaceView mMyGLSurfaceView;
GLViewVideoSink(MyGLSurfaceView myGLSurfaceView) {
mMyGLSurfaceView = myGLSurfaceView;
}
void setFixedSize(int width, int height) {
}
void useAsSinkForJava(MediaPlayer mediaPlayer) {
SurfaceTexture st = mMyGLSurfaceView.getSurfaceTexture();
Surface s = new Surface(st);
mediaPlayer.setSurface(s);
s.release();
}
void useAsSinkForNative() {
SurfaceTexture st = mMyGLSurfaceView.getSurfaceTexture();
Surface s = new Surface(st);
setSurface(s);
s.release();
}
}
}

View File

@@ -1,133 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/hello"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/source_select"
/>
<Spinner
android:id="@+id/source_spinner"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/source_prompt"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/sink_select"
/>
<Spinner
android:id="@+id/sink_spinner"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/sink_prompt"
/>
<LinearLayout
android:orientation="horizontal"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
>
<Button
android:id="@+id/start_java"
android:text="@string/start_java"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<Button
android:id="@+id/start_native"
android:text="@string/start_native"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<Button
android:id="@+id/finish"
android:text="@string/finish"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
</LinearLayout>
<LinearLayout
android:orientation="horizontal"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
>
<Button
android:id="@+id/rewind_java"
android:text="@string/rewind_java"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
<Button
android:id="@+id/rewind_native"
android:text="@string/rewind_native"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
/>
</LinearLayout>
<LinearLayout
android:orientation="horizontal"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="S1"
/>
<SurfaceView
android:id="@+id/surfaceview1"
android:layout_width="320px"
android:layout_height="240px"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="S2"
/>
<SurfaceView
android:id="@+id/surfaceview2"
android:layout_width="400px"
android:layout_height="224px"
/>
</LinearLayout>
<LinearLayout
android:orientation="horizontal"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="ST1"
/>
<com.example.nativemedia.MyGLSurfaceView
android:id="@+id/glsurfaceview1"
android:layout_width="320px"
android:layout_height="240px"
/>
<TextView
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="ST2"
/>
<com.example.nativemedia.MyGLSurfaceView
android:id="@+id/glsurfaceview2"
android:layout_width="320px"
android:layout_height="240px"
/>
</LinearLayout>
</LinearLayout>

Binary file not shown (3.3 KiB).

Binary file not shown (2.2 KiB).

Binary file not shown (4.7 KiB).

Binary file not shown (7.5 KiB).

View File

@@ -1,29 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<string name="hello">Hello, Android, using native media!</string>
<string name="app_name">NativeMedia</string>
<string name="start_java">Start/Pause\nJava MediaPlayer</string>
<string name="start_native">Start/Pause\nnative MediaPlayer</string>
<string name="finish">Finish</string>
<string name="rewind_java">Rewind\nJava MediaPlayer</string>
<string name="rewind_native">Rewind\nnative MediaPlayer</string>
<string name="source_select">Please select the media source</string>
<string name="source_prompt">Media source</string>
<string-array name="source_array">
<item>clips/NativeMedia.ts</item>
</string-array>
<string name="sink_select">Please select the video sink</string>
<string name="sink_prompt">Video sink</string>
<string-array name="sink_array">
<item>Surface 1</item>
<item>Surface 2</item>
<item>SurfaceTexture 1</item>
<item>SurfaceTexture 2</item>
</string-array>
<string name="error_creation">Error creating player:
check media file is copied to android device</string>
</resources>

Binary file not shown (83 KiB).

View File

@@ -1,60 +0,0 @@
# Native Plasma
Native Plasma is an Android sample that renders a plasma effect into the
native window's buffer from C code using
[Native Activity](http://developer.android.com/reference/android/app/NativeActivity.html).
This sample uses the new
[Android Studio CMake plugin](http://tools.android.com/tech-docs/external-c-builds)
with C++ support.
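At its core, each frame locks the window's buffer, fills it with plasma
pixels, and posts it back to the display. A condensed sketch of that path
(error handling trimmed; the full version is in `plasma.c`):

```c
#include <android/native_window.h>

// Implemented in plasma.c: writes RGB565 plasma pixels into the buffer.
void fill_plasma(ANativeWindow_Buffer* buffer, double time_ms);

void draw_frame(ANativeWindow* window, double time_ms) {
  ANativeWindow_Buffer buffer;
  if (ANativeWindow_lock(window, &buffer, NULL) < 0) {
    return;  // The surface is gone or not ready; skip this frame.
  }
  fill_plasma(&buffer, time_ms);
  ANativeWindow_unlockAndPost(window);  // Display the frame.
}
```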
## Pre-requisites
- Android Studio 2.2+ with [NDK](https://developer.android.com/ndk/) bundle.
## Getting Started
1. [Download Android Studio](http://developer.android.com/sdk/index.html)
1. Launch Android Studio.
1. Open the sample directory.
1. Open *File/Project Structure...*
- Click *Download* or *Select NDK location*.
1. Click *Tools/Android/Sync Project with Gradle Files*.
1. Click *Run/Run 'app'*.
## Screenshots
![screenshot](screenshot.png)
## Support
If you've found an error in these samples, please
[file an issue](https://github.com/googlesamples/android-ndk/issues/new).
Patches are encouraged, and may be submitted by
[forking this project](https://github.com/googlesamples/android-ndk/fork) and
submitting a pull request through GitHub. Please see
[CONTRIBUTING.md](../CONTRIBUTING.md) for more details.
- [Stack Overflow](http://stackoverflow.com/questions/tagged/android-ndk)
- [Android Tools Feedbacks](http://tools.android.com/feedback)
## License
Copyright 2015 Google, Inc.
Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for
additional information regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

View File

@@ -1,17 +0,0 @@
plugins {
id "ndksamples.android.application"
}
android {
namespace 'com.example.native_plasma'
defaultConfig {
applicationId 'com.example.native_plasma'
}
externalNativeBuild {
cmake {
path 'src/main/cpp/CMakeLists.txt'
}
}
}

View File

@@ -1,22 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
android:versionCode="1"
android:versionName="1.0">
<application
android:allowBackup="false"
android:fullBackupContent="false"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:hasCode="false">
<activity android:name="android.app.NativeActivity"
android:label="@string/app_name"
android:exported="true">
<meta-data android:name="android.app.lib_name"
android:value="native-plasma" />
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>

View File

@@ -1,41 +0,0 @@
#
# Copyright (C) The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
cmake_minimum_required(VERSION 3.22.1)
project(NativePlasma LANGUAGES C)
# build native_app_glue as a static lib
add_library(native_app_glue STATIC
${ANDROID_NDK}/sources/android/native_app_glue/android_native_app_glue.c)
# now build app's shared lib
add_library(native-plasma SHARED
plasma.c)
# Export ANativeActivity_onCreate(),
# Refer to: https://github.com/android-ndk/ndk/issues/381.
set(CMAKE_SHARED_LINKER_FLAGS
"${CMAKE_SHARED_LINKER_FLAGS} -u ANativeActivity_onCreate")
target_include_directories(native-plasma PRIVATE
${ANDROID_NDK}/sources/android/native_app_glue)
# add lib dependencies
target_link_libraries(native-plasma
android
native_app_glue
log
m)

View File

@@ -1,480 +0,0 @@
/*
* Copyright (C) 2010 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include <android/log.h>
#include <android_native_app_glue.h>
#include <errno.h>
#include <jni.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <time.h>
#define LOG_TAG "libplasma"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)
#define LOGW(...) __android_log_print(ANDROID_LOG_WARN, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
/* Set to 1 to enable debug log traces. */
#define DEBUG 0
/* Set to 1 to optimize memory stores when generating plasma. */
#define OPTIMIZE_WRITES 1
/* Return current time in milliseconds */
static double now_ms(void) {
struct timeval tv;
gettimeofday(&tv, NULL);
return tv.tv_sec * 1000. + tv.tv_usec / 1000.;
}
/* We're going to perform computations for every pixel of the target
* bitmap. Floating-point operations are very slow on ARMv5, and not
* too bad on ARMv7 with the exception of trigonometric functions.
*
* For better performance on all platforms, we're going to use fixed-point
* arithmetic and all kinds of tricks.
*/
typedef int32_t Fixed;
#define FIXED_BITS 16
#define FIXED_ONE (1 << FIXED_BITS)
#define FIXED_AVERAGE(x, y) (((x) + (y)) >> 1)
#define FIXED_FROM_INT(x) ((x) << FIXED_BITS)
#define FIXED_TO_INT(x) ((x) >> FIXED_BITS)
#define FIXED_FROM_FLOAT(x) ((Fixed)((x) * FIXED_ONE))
#define FIXED_TO_FLOAT(x) ((x) / (1. * FIXED_ONE))
#define FIXED_MUL(x, y) (((int64_t)(x) * (y)) >> FIXED_BITS)
#define FIXED_DIV(x, y) (((int64_t)(x) * FIXED_ONE) / (y))
#define FIXED_DIV2(x) ((x) >> 1)
#define FIXED_FRAC(x) ((x) & ((1 << FIXED_BITS) - 1))
#define FIXED_TRUNC(x) ((x) & ~((1 << FIXED_BITS) - 1))
#define FIXED_FROM_INT_FLOAT(x, f) (Fixed)((x) * (FIXED_ONE * (f)))
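/*
* Illustrative example: with FIXED_BITS == 16, FIXED_ONE == 65536, so
* FIXED_FROM_FLOAT(1.5) == 98304 and FIXED_FROM_FLOAT(2.0) == 131072.
* FIXED_MUL(98304, 131072) == ((int64_t)98304 * 131072) >> 16 == 196608,
* which is exactly FIXED_FROM_FLOAT(3.0). The product is widened to 64 bits
* before shifting so the intermediate value cannot overflow.
*/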
typedef int32_t Angle;
#define ANGLE_BITS 9
#if ANGLE_BITS < 8
#error ANGLE_BITS must be at least 8
#endif
#define ANGLE_2PI (1 << ANGLE_BITS)
#define ANGLE_PI (1 << (ANGLE_BITS - 1))
#define ANGLE_PI2 (1 << (ANGLE_BITS - 2))
#define ANGLE_PI4 (1 << (ANGLE_BITS - 3))
#define ANGLE_FROM_FLOAT(x) (Angle)((x) * ANGLE_PI / M_PI)
#define ANGLE_TO_FLOAT(x) ((x) * M_PI / ANGLE_PI)
#if ANGLE_BITS <= FIXED_BITS
#define ANGLE_FROM_FIXED(x) (Angle)((x) >> (FIXED_BITS - ANGLE_BITS))
#define ANGLE_TO_FIXED(x) (Fixed)((x) << (FIXED_BITS - ANGLE_BITS))
#else
#define ANGLE_FROM_FIXED(x) (Angle)((x) << (ANGLE_BITS - FIXED_BITS))
#define ANGLE_TO_FIXED(x) (Fixed)((x) >> (ANGLE_BITS - FIXED_BITS))
#endif
static Fixed angle_sin_tab[ANGLE_2PI + 1];
static void init_angles(void) {
int nn;
for (nn = 0; nn < ANGLE_2PI + 1; nn++) {
double radians = nn * M_PI / ANGLE_PI;
angle_sin_tab[nn] = FIXED_FROM_FLOAT(sin(radians));
}
}
static __inline__ Fixed angle_sin(Angle a) {
return angle_sin_tab[(uint32_t)a & (ANGLE_2PI - 1)];
}
static __inline__ Fixed angle_cos(Angle a) { return angle_sin(a + ANGLE_PI2); }
static __inline__ Fixed fixed_sin(Fixed f) {
return angle_sin(ANGLE_FROM_FIXED(f));
}
static __inline__ Fixed fixed_cos(Fixed f) {
return angle_cos(ANGLE_FROM_FIXED(f));
}
/* Color palette used for rendering the plasma */
#define PALETTE_BITS 8
#define PALETTE_SIZE (1 << PALETTE_BITS)
#if PALETTE_BITS > FIXED_BITS
#error PALETTE_BITS must be smaller than FIXED_BITS
#endif
static uint16_t palette[PALETTE_SIZE];
static uint16_t make565(int red, int green, int blue) {
return (uint16_t)(((red << 8) & 0xf800) | ((green << 3) & 0x07e0) |
((blue >> 3) & 0x001f));
}
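/* Examples: make565(255, 0, 0) == 0xF800, make565(0, 255, 0) == 0x07E0, and
* make565(0, 0, 255) == 0x001F: 5 bits of red, 6 of green, 5 of blue. */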
static void init_palette(void) {
int nn, mm = 0;
/* fun with colors */
for (nn = 0; nn < PALETTE_SIZE / 4; nn++) {
int jj = (nn - mm) * 4 * 255 / PALETTE_SIZE;
palette[nn] = make565(255, jj, 255 - jj);
}
for (mm = nn; nn < PALETTE_SIZE / 2; nn++) {
int jj = (nn - mm) * 4 * 255 / PALETTE_SIZE;
palette[nn] = make565(255 - jj, 255, jj);
}
for (mm = nn; nn < PALETTE_SIZE * 3 / 4; nn++) {
int jj = (nn - mm) * 4 * 255 / PALETTE_SIZE;
palette[nn] = make565(0, 255 - jj, 255);
}
for (mm = nn; nn < PALETTE_SIZE; nn++) {
int jj = (nn - mm) * 4 * 255 / PALETTE_SIZE;
palette[nn] = make565(jj, 0, 255);
}
}
static __inline__ uint16_t palette_from_fixed(Fixed x) {
if (x < 0) x = -x;
if (x >= FIXED_ONE) x = FIXED_ONE - 1;
int idx = FIXED_FRAC(x) >> (FIXED_BITS - PALETTE_BITS);
return palette[idx & (PALETTE_SIZE - 1)];
}
static void init_tables(void) {
init_palette();
init_angles();
}
static void fill_plasma(ANativeWindow_Buffer* buffer, double t) {
Fixed yt1 = FIXED_FROM_FLOAT(t / 1230.);
Fixed yt2 = yt1;
Fixed xt10 = FIXED_FROM_FLOAT(t / 3000.);
Fixed xt20 = xt10;
#define YT1_INCR FIXED_FROM_FLOAT(1 / 100.)
#define YT2_INCR FIXED_FROM_FLOAT(1 / 163.)
void* pixels = buffer->bits;
// LOGI("width=%d height=%d stride=%d format=%d", buffer->width,
// buffer->height,
// buffer->stride, buffer->format);
int yy;
for (yy = 0; yy < buffer->height; yy++) {
uint16_t* line = (uint16_t*)pixels;
Fixed base = fixed_sin(yt1) + fixed_sin(yt2);
Fixed xt1 = xt10;
Fixed xt2 = xt20;
yt1 += YT1_INCR;
yt2 += YT2_INCR;
#define XT1_INCR FIXED_FROM_FLOAT(1 / 173.)
#define XT2_INCR FIXED_FROM_FLOAT(1 / 242.)
#if OPTIMIZE_WRITES
/* optimize memory writes by generating one aligned 32-bit store
* for every pair of pixels.
*/
uint16_t* line_end = line + buffer->width;
if (line < line_end) {
if (((uint32_t)(uintptr_t)line & 3) != 0) {
Fixed ii = base + fixed_sin(xt1) + fixed_sin(xt2);
xt1 += XT1_INCR;
xt2 += XT2_INCR;
line[0] = palette_from_fixed(ii >> 2);
line++;
}
while (line + 2 <= line_end) {
Fixed i1 = base + fixed_sin(xt1) + fixed_sin(xt2);
xt1 += XT1_INCR;
xt2 += XT2_INCR;
Fixed i2 = base + fixed_sin(xt1) + fixed_sin(xt2);
xt1 += XT1_INCR;
xt2 += XT2_INCR;
// All Android ABIs are little-endian, so the low 16 bits of the store
// land at the lower address: put the first pixel (i1) in the low half
// and the second pixel (i2) in the high half.
uint32_t pixel = ((uint32_t)palette_from_fixed(i2 >> 2) << 16) |
(uint32_t)palette_from_fixed(i1 >> 2);
((uint32_t*)line)[0] = pixel;
line += 2;
}
if (line < line_end) {
Fixed ii = base + fixed_sin(xt1) + fixed_sin(xt2);
line[0] = palette_from_fixed(ii >> 2);
line++;
}
}
#else /* !OPTIMIZE_WRITES */
int xx;
for (xx = 0; xx < buffer->width; xx++) {
Fixed ii = base + fixed_sin(xt1) + fixed_sin(xt2);
xt1 += XT1_INCR;
xt2 += XT2_INCR;
line[xx] = palette_from_fixed(ii / 4);
}
#endif /* !OPTIMIZE_WRITES */
// go to the next line; buffer->stride is measured in pixels, not bytes
pixels = (uint16_t*)pixels + buffer->stride;
}
}
/* simple stats management */
typedef struct {
double renderTime;
double frameTime;
} FrameStats;
#define MAX_FRAME_STATS 200
#define MAX_PERIOD_MS 1500
typedef struct {
double firstTime;
double lastTime;
double frameTime;
int firstFrame;
int numFrames;
FrameStats frames[MAX_FRAME_STATS];
} Stats;
static void stats_init(Stats* s) {
s->lastTime = now_ms();
s->firstTime = 0.;
s->firstFrame = 0;
s->numFrames = 0;
}
static void stats_startFrame(Stats* s) { s->frameTime = now_ms(); }
static void stats_endFrame(Stats* s) {
double now = now_ms();
double renderTime = now - s->frameTime;
double frameTime = now - s->lastTime;
int nn;
if (now - s->firstTime >= MAX_PERIOD_MS) {
if (s->numFrames > 0) {
double minRender, maxRender, avgRender;
double minFrame, maxFrame, avgFrame;
int count;
nn = s->firstFrame;
minRender = maxRender = avgRender = s->frames[nn].renderTime;
minFrame = maxFrame = avgFrame = s->frames[nn].frameTime;
for (count = s->numFrames; count > 0; count--) {
nn += 1;
if (nn >= MAX_FRAME_STATS) nn -= MAX_FRAME_STATS;
double render = s->frames[nn].renderTime;
if (render < minRender) minRender = render;
if (render > maxRender) maxRender = render;
double frame = s->frames[nn].frameTime;
if (frame < minFrame) minFrame = frame;
if (frame > maxFrame) maxFrame = frame;
avgRender += render;
avgFrame += frame;
}
avgRender /= s->numFrames;
avgFrame /= s->numFrames;
LOGI(
"frame/s (avg,min,max) = (%.1f,%.1f,%.1f) "
"render time ms (avg,min,max) = (%.1f,%.1f,%.1f)\n",
1000. / avgFrame, 1000. / maxFrame, 1000. / minFrame, avgRender,
minRender, maxRender);
}
s->numFrames = 0;
s->firstFrame = 0;
s->firstTime = now;
}
nn = s->firstFrame + s->numFrames;
if (nn >= MAX_FRAME_STATS) nn -= MAX_FRAME_STATS;
s->frames[nn].renderTime = renderTime;
s->frames[nn].frameTime = frameTime;
if (s->numFrames < MAX_FRAME_STATS) {
s->numFrames += 1;
} else {
s->firstFrame += 1;
if (s->firstFrame >= MAX_FRAME_STATS) s->firstFrame -= MAX_FRAME_STATS;
}
s->lastTime = now;
}
// ----------------------------------------------------------------------
struct engine {
struct android_app* app;
Stats stats;
int animating;
};
static int64_t start_ms;
static void engine_draw_frame(struct engine* engine) {
if (engine->app->window == NULL) {
// No window.
return;
}
ANativeWindow_Buffer buffer;
if (ANativeWindow_lock(engine->app->window, &buffer, NULL) < 0) {
LOGW("Unable to lock window buffer");
return;
}
stats_startFrame(&engine->stats);
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
int64_t time_ms =
(((int64_t)now.tv_sec) * 1000000000LL + now.tv_nsec) / 1000000;
time_ms -= start_ms;
/* Now fill the values with a nice little plasma */
fill_plasma(&buffer, time_ms);
ANativeWindow_unlockAndPost(engine->app->window);
stats_endFrame(&engine->stats);
}
static void engine_term_display(struct engine* engine) {
engine->animating = 0;
}
static int32_t engine_handle_input(struct android_app* app,
AInputEvent* event) {
struct engine* engine = (struct engine*)app->userData;
if (AInputEvent_getType(event) == AINPUT_EVENT_TYPE_MOTION) {
engine->animating = 1;
return 1;
} else if (AInputEvent_getType(event) == AINPUT_EVENT_TYPE_KEY) {
LOGI("Key event: action=%d keyCode=%d metaState=0x%x",
AKeyEvent_getAction(event), AKeyEvent_getKeyCode(event),
AKeyEvent_getMetaState(event));
}
return 0;
}
static void engine_handle_cmd(struct android_app* app, int32_t cmd) {
static int32_t format = WINDOW_FORMAT_RGB_565;
struct engine* engine = (struct engine*)app->userData;
switch (cmd) {
case APP_CMD_INIT_WINDOW:
if (engine->app->window != NULL) {
// fill_plasma() assumes RGB 565. Save the window's original format so it
// can be restored on APP_CMD_TERM_WINDOW, then force 565 here.
format = ANativeWindow_getFormat(app->window);
ANativeWindow_setBuffersGeometry(
app->window, ANativeWindow_getWidth(app->window),
ANativeWindow_getHeight(app->window), WINDOW_FORMAT_RGB_565);
engine_draw_frame(engine);
}
break;
case APP_CMD_TERM_WINDOW:
engine_term_display(engine);
ANativeWindow_setBuffersGeometry(
app->window, ANativeWindow_getWidth(app->window),
ANativeWindow_getHeight(app->window), format);
break;
case APP_CMD_LOST_FOCUS:
engine->animating = 0;
engine_draw_frame(engine);
break;
}
}
void android_main(struct android_app* state) {
static int init;
struct engine engine;
memset(&engine, 0, sizeof(engine));
state->userData = &engine;
state->onAppCmd = engine_handle_cmd;
state->onInputEvent = engine_handle_input;
engine.app = state;
if (!init) {
init_tables();
init = 1;
}
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
start_ms = (((int64_t)now.tv_sec) * 1000000000LL + now.tv_nsec) / 1000000;
stats_init(&engine.stats);
// loop waiting for stuff to do.
while (!state->destroyRequested) {
// If not animating, we will block forever waiting for events.
// If animating, we loop until all events are read, then continue
// to draw the next frame of animation.
struct android_poll_source* source = NULL;
int result = ALooper_pollOnce(engine.animating ? 0 : -1, NULL, NULL,
(void**)&source);
if (result == ALOOPER_POLL_ERROR) {
LOGE("ALooper_pollOnce returned an error");
abort();
}
// Process this event.
if (source != NULL) {
source->process(state, source);
}
if (engine.animating) {
engine_draw_frame(&engine);
}
}
LOGI("Engine thread destroy requested!");
engine_term_display(&engine);
}

Binary file not shown (3.3 KiB).

Binary file not shown (2.2 KiB).

Binary file not shown (4.7 KiB).

Binary file not shown (7.5 KiB).

View File

@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<string name="app_name">Native Plasma</string>
</resources>

Binary file not shown (432 KiB).

View File

@@ -1,9 +0,0 @@
*.iml
.gradle
/local.properties
/.idea
.DS_Store
/build
/captures
.externalNativeBuild
.cxx

View File

@@ -1,59 +1,7 @@
# Android Neural Networks API Sample
# Sample removed
The samples demonstrate how to use Android NNAPI exported through Android NDK:
This sample has been removed because the Android Neural Networks API has been
deprecated. Apps should instead use TensorFlow Lite. For more information, see
the [NNAPI Migration Guide].
- basic: showcase the main NNAPI concept from Android 8
- sequence: showcase the advanced features added in Android 11
Check each module's README.md for additional descriptions and requirements.
## Pre-requisites
- Android Studio 4.0+.
- NDK r16+.
- Android API 27+.
## Getting Started
1. [Download Android Studio](http://developer.android.com/sdk/index.html)
1. Launch Android Studio.
1. Open the sample directory.
1. Click *Tools/Android/Sync Project with Gradle Files*.
1. Click *Run/Run 'app'*.
## Screenshots
<img src="basic/screenshot.png" width="360">
<img src="sequence/screenshot.png" width="360">
## Support
If you've found an error in these samples, please
[file an issue](https://github.com/android/ndk-samples/issues/new).
Patches are encouraged, and may be submitted by
[forking this project](https://github.com/android/ndk-samples/fork) and
submitting a pull request through GitHub. Please see
[CONTRIBUTING.md](../CONTRIBUTING.md) for more details.
- [Stack Overflow](http://stackoverflow.com/questions/tagged/android-ndk)
- [Android Tools Feedbacks](http://tools.android.com/feedback)
## License
Copyright 2020 Google LLC
Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for
additional information regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
[NNAPI Migration Guide]: https://developer.android.com/ndk/guides/neuralnetworks/migration-guide

View File

@@ -1 +0,0 @@
/build

View File

@@ -1,44 +0,0 @@
# Android Neural Networks API Sample: Basic
The Android Neural Networks API (NN API) sample demonstrates basic usage of
the NN API with a simple model that consists of three operations: two
additions and a multiplication.
The sums created by the additions are the inputs to the multiplication. In
essence, we are creating a graph that computes: (tensor0 + tensor1) * (tensor2 +
tensor3).
```
tensor0 ---+
+--- ADD ---> intermediateOutput0 ---+
tensor1 ---+ |
+--- MUL---> output
tensor2 ---+ |
+--- ADD ---> intermediateOutput1 ---+
tensor3 ---+
```
Two of the four tensors being added, tensor0 and tensor2, are constants
defined in the model. They represent the weights that would have been learned
during a training process, and are loaded from model_data.bin.
The other two tensors, tensor1 and tensor3 will be inputs to the model. Their
values will be provided when we execute the model. These values can change from
execution to execution.
Besides the two input tensors, an optional fused activation function can also be
defined for ADD and MUL. In this example, we'll simply set it to NONE.
The model then has 8 operands:
- 2 tensors that are inputs to the model. These are fed to the two ADD
operations.
- 2 constant tensors that are the other two inputs to the ADD operations.
- 1 fuse activation operand reused for the ADD operations and the MUL operation.
- 2 intermediate tensors, representing outputs of the ADD operations and inputs
to the MUL operation.
- 1 model output.
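
For orientation, here is a condensed sketch of how the three operations are
wired up with the NNAPI C API, using the operand indices the sample assigns
(error handling omitted; the real code is in this module's simple_model.cpp):

```c
// Indices: 0 = fused activation (NONE), 1..4 = tensor0..tensor3,
// 5..6 = intermediate outputs, 7 = model output.
uint32_t add1In[] = {1, 2, 0};
uint32_t add1Out = 5;
ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD, 3, add1In, 1,
                                  &add1Out);

uint32_t add2In[] = {3, 4, 0};
uint32_t add2Out = 6;
ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD, 3, add2In, 1,
                                  &add2Out);

uint32_t mulIn[] = {5, 6, 0};
uint32_t mulOut = 7;
ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_MUL, 3, mulIn, 1,
                                  &mulOut);
```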
## Screenshots
<img src="screenshot.png" width="480">

View File

@@ -1,35 +0,0 @@
plugins {
id "ndksamples.android.application"
id 'ndksamples.android.kotlin'
}
android {
namespace 'com.example.android.basic'
defaultConfig {
applicationId "com.example.android.basic"
minSdkVersion 27
versionCode 1
versionName "1.0"
}
externalNativeBuild {
cmake {
path "src/main/cpp/CMakeLists.txt"
}
}
buildFeatures {
viewBinding true
}
androidResources {
noCompress 'bin'
}
}
dependencies {
implementation libs.androidx.constraintlayout
implementation libs.kotlinx.coroutines.core
implementation libs.kotlinx.coroutines.android
}

View File

@@ -1,21 +0,0 @@
# Add project specific ProGuard rules here.
# You can control the set of applied configuration files using the
# proguardFiles setting in build.gradle.
#
# For more details, see
# http://developer.android.com/guide/developing/tools/proguard.html
# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
# public *;
#}
# Uncomment this to preserve the line number information for
# debugging stack traces.
#-keepattributes SourceFile,LineNumberTable
# If you keep the line number information, uncomment this to
# hide the original source file name.
#-renamesourcefileattribute SourceFile

Binary file not shown (66 KiB).

View File

@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity"
android:exported="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>

View File

@@ -1,15 +0,0 @@
cmake_minimum_required(VERSION 3.22.1)
project(NnSamplesBasic LANGUAGES CXX)
add_library(basic
SHARED
nn_sample.cpp
simple_model.cpp
)
target_link_libraries(basic
# Link with libneuralnetworks.so for NN API
neuralnetworks
android
log
)

View File

@@ -1,75 +0,0 @@
/**
* Copyright 2017 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <android/sharedmem.h>
#include <fcntl.h>
#include <jni.h>
#include <sys/mman.h>
#include <iomanip>
#include <sstream>
#include <string>
#include "simple_model.h"
extern "C" JNIEXPORT jlong JNICALL
Java_com_example_android_basic_MainActivity_initModel(JNIEnv* env,
jobject /* this */,
jobject _assetManager,
jstring _assetName) {
// Get the file descriptor of the model data file.
AAssetManager* assetManager = AAssetManager_fromJava(env, _assetManager);
const char* assetName = env->GetStringUTFChars(_assetName, NULL);
AAsset* asset =
AAssetManager_open(assetManager, assetName, AASSET_MODE_BUFFER);
// Release the JNI string as soon as it is no longer needed, so the error
// path below does not leak it.
env->ReleaseStringUTFChars(_assetName, assetName);
if (asset == nullptr) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"Failed to open the asset.");
return 0;
}
SimpleModel* nn_model = new SimpleModel(asset);
AAsset_close(asset);
if (!nn_model->CreateCompiledModel()) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"Failed to prepare the model.");
delete nn_model;  // Don't leak the model when compilation fails.
return 0;
}
return (jlong)(uintptr_t)nn_model;
}
extern "C" JNIEXPORT jfloat JNICALL
Java_com_example_android_basic_MainActivity_startCompute(JNIEnv* env,
jobject /* this */,
jlong _nnModel,
jfloat inputValue1,
jfloat inputValue2) {
SimpleModel* nn_model = (SimpleModel*)_nnModel;
float result = 0.0f;
nn_model->Compute(inputValue1, inputValue2, &result);
return result;
}
extern "C" JNIEXPORT void JNICALL
Java_com_example_android_basic_MainActivity_destroyModel(JNIEnv* env,
jobject /* this */,
jlong _nnModel) {
SimpleModel* nn_model = (SimpleModel*)_nnModel;
delete (nn_model);
}

View File

@@ -1,556 +0,0 @@
/**
* Copyright 2017 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "simple_model.h"
#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <android/sharedmem.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string>
namespace {
// Create ANeuralNetworksMemory from an asset file.
//
// Note that, at API level 30 or earlier, the NNAPI drivers may not have the
// permission to access the asset file. To work around this issue, here we will:
// 1. Allocate a large-enough shared memory to hold the model data;
// 2. Copy the asset file to the shared memory;
// 3. Create the NNAPI memory with the file descriptor of the shared memory.
ANeuralNetworksMemory* createMemoryFromAsset(AAsset* asset) {
// Allocate a large-enough shared memory to hold the model data.
off_t length = AAsset_getLength(asset);
int fd = ASharedMemory_create("model_data", length);
if (fd < 0) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ASharedMemory_create failed with size %lld",
(long long)length);  // off_t does not match %d
return nullptr;
}
// Copy the asset file to the shared memory.
void* data = mmap(nullptr, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (data == MAP_FAILED) {  // mmap returns MAP_FAILED, not nullptr, on error
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"Failed to map a shared memory");
close(fd);
return nullptr;
}
if (AAsset_read(asset, data, length) != length) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"Failed to read the asset into the shared memory");
munmap(data, length);
close(fd);
return nullptr;
}
munmap(data, length);
// Create the NNAPI memory with the file descriptor of the shared memory.
ANeuralNetworksMemory* memory;
int status = ANeuralNetworksMemory_createFromFd(
length, PROT_READ | PROT_WRITE, fd, 0, &memory);
// It is safe to close the file descriptor here because
// ANeuralNetworksMemory_createFromFd will create a dup.
close(fd);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromFd failed for trained weights");
return nullptr;
}
return memory;
}
} // namespace
/**
* SimpleModel Constructor.
*
* Initialize the member variables, including the shared memory objects.
*/
SimpleModel::SimpleModel(AAsset* asset)
: model_(nullptr), compilation_(nullptr), dimLength_(TENSOR_SIZE) {
tensorSize_ = dimLength_;
inputTensor1_.resize(tensorSize_);
// Create ANeuralNetworksMemory from a file containing the trained data.
memoryModel_ = createMemoryFromAsset(asset);
// Create ASharedMemory to hold the data for the second input tensor and
// the output tensor.
inputTensor2Fd_ = ASharedMemory_create("input2", tensorSize_ * sizeof(float));
outputTensorFd_ = ASharedMemory_create("output", tensorSize_ * sizeof(float));
// Create ANeuralNetworksMemory objects from the corresponding ASharedMemory
// objects.
int status =
ANeuralNetworksMemory_createFromFd(tensorSize_ * sizeof(float), PROT_READ,
inputTensor2Fd_, 0, &memoryInput2_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromFd failed for Input2");
return;
}
status = ANeuralNetworksMemory_createFromFd(
tensorSize_ * sizeof(float), PROT_READ | PROT_WRITE, outputTensorFd_, 0,
&memoryOutput_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromFd failed for Output");
return;
}
}
/**
* Create a graph that consists of three operations: two additions and a
* multiplication.
* The sums created by the additions are the inputs to the multiplication. In
* essence, we are creating a graph that computes:
* (tensor0 + tensor1) * (tensor2 + tensor3).
*
* tensor0 ---+
* +--- ADD ---> intermediateOutput0 ---+
* tensor1 ---+ |
* +--- MUL---> output
* tensor2 ---+ |
* +--- ADD ---> intermediateOutput1 ---+
* tensor3 ---+
*
* Two of the four tensors being added, tensor0 and tensor2, are constants
* defined in the model. They represent the weights that would have been
* learned during a training process.
*
* The other two tensors, tensor1 and tensor3 will be inputs to the model. Their
* values will be provided when we execute the model. These values can change
* from execution to execution.
*
* Besides the two input tensors, an optional fused activation function can
* also be defined for ADD and MUL. In this example, we'll simply set it to
* NONE.
*
* The graph then has 8 operands:
* - 2 tensors that are inputs to the model. These are fed to the two
* ADD operations.
* - 2 constant tensors that are the other two inputs to the ADD operations.
* - 1 fuse activation operand reused for the ADD operations and the MUL
* operation.
* - 2 intermediate tensors, representing outputs of the ADD operations and
* inputs to the MUL operation.
* - 1 model output.
*
* @return true for success, false otherwise
*/
bool SimpleModel::CreateCompiledModel() {
int32_t status;
// Create the ANeuralNetworksModel handle.
status = ANeuralNetworksModel_create(&model_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_create failed");
return false;
}
uint32_t dimensions[] = {dimLength_};
ANeuralNetworksOperandType float32TensorType{
.type = ANEURALNETWORKS_TENSOR_FLOAT32,
.dimensionCount = sizeof(dimensions) / sizeof(dimensions[0]),
.dimensions = dimensions,
.scale = 0.0f,
.zeroPoint = 0,
};
ANeuralNetworksOperandType scalarInt32Type{
.type = ANEURALNETWORKS_INT32,
.dimensionCount = 0,
.dimensions = nullptr,
.scale = 0.0f,
.zeroPoint = 0,
};
/**
* Add operands and operations to construct the model.
*
* Operands are implicitly identified by the order in which they are added to
* the model, starting from 0.
*
* These indexes are not returned by the model_addOperand call. The
* application must manage these values. Here, we use opIdx to do the
* bookkeeping.
*/
uint32_t opIdx = 0;
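// For reference, the indices assigned below come out as: 0 fused activation,
// 1 tensor0, 2 tensor1, 3 tensor2, 4 tensor3, 5 intermediateOutput0,
// 6 intermediateOutput1, 7 multiplierOutput (8 operands in total).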
// We first add the operand for the NONE activation function, and set its
// value to ANEURALNETWORKS_FUSED_NONE.
// This constant scalar operand will be used for all 3 operations.
status = ANeuralNetworksModel_addOperand(model_, &scalarInt32Type);
uint32_t fusedActivationFuncNone = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)",
fusedActivationFuncNone);
return false;
}
FuseCode fusedActivationCodeValue = ANEURALNETWORKS_FUSED_NONE;
status = ANeuralNetworksModel_setOperandValue(
model_, fusedActivationFuncNone, &fusedActivationCodeValue,
sizeof(fusedActivationCodeValue));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_setOperandValue failed for operand (%d)",
fusedActivationFuncNone);
return false;
}
// Add operands for the tensors.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t tensor0 = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor0);
return false;
}
// tensor0 is a constant tensor that was established during training.
// We read these values from the corresponding ANeuralNetworksMemory object.
status = ANeuralNetworksModel_setOperandValueFromMemory(
model_, tensor0, memoryModel_, 0, tensorSize_ * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_setOperandValueFromMemory failed "
"for operand (%d)",
tensor0);
return false;
}
// tensor1 is one of the user provided input tensors to the trained model.
// Its value is determined pre-execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t tensor1 = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor1);
return false;
}
// tensor2 is a constant tensor that was established during training.
// We read these values from the corresponding ANeuralNetworksMemory object.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t tensor2 = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor2);
return false;
}
status = ANeuralNetworksModel_setOperandValueFromMemory(
model_, tensor2, memoryModel_, tensorSize_ * sizeof(float),
tensorSize_ * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_setOperandValueFromMemory failed "
"for operand (%d)",
tensor2);
return false;
}
// tensor3 is one of the user provided input tensors to the trained model.
// Its value is determined pre-execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t tensor3 = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", tensor3);
return false;
}
// intermediateOutput0 is the output of the first ADD operation.
// Its value is computed during execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t intermediateOutput0 = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)",
intermediateOutput0);
return false;
}
// intermediateOutput1 is the output of the second ADD operation.
// Its value is computed during execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t intermediateOutput1 = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)",
intermediateOutput1);
return false;
}
// multiplierOutput is the output of the MUL operation.
// Its value will be computed during execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t multiplierOutput = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)",
multiplierOutput);
return false;
}
// Add the first ADD operation.
std::vector<uint32_t> add1InputOperands = {
tensor0,
tensor1,
fusedActivationFuncNone,
};
status = ANeuralNetworksModel_addOperation(
model_, ANEURALNETWORKS_ADD, add1InputOperands.size(),
add1InputOperands.data(), 1, &intermediateOutput0);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperation failed for ADD_1");
return false;
}
// Add the second ADD operation.
// Note the fusedActivationFuncNone is used again.
std::vector<uint32_t> add2InputOperands = {
tensor2,
tensor3,
fusedActivationFuncNone,
};
status = ANeuralNetworksModel_addOperation(
model_, ANEURALNETWORKS_ADD, add2InputOperands.size(),
add2InputOperands.data(), 1, &intermediateOutput1);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperation failed for ADD_2");
return false;
}
// Add the MUL operation.
// Note that intermediateOutput0 and intermediateOutput1 are specified
// as inputs to the operation.
std::vector<uint32_t> mulInputOperands = {
intermediateOutput0, intermediateOutput1, fusedActivationFuncNone};
status = ANeuralNetworksModel_addOperation(
model_, ANEURALNETWORKS_MUL, mulInputOperands.size(),
mulInputOperands.data(), 1, &multiplierOutput);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperation failed for MUL");
return false;
}
// Identify the input and output tensors to the model.
// Inputs: {tensor1, tensor3}
// Outputs: {multiplierOutput}
std::vector<uint32_t> modelInputOperands = {
tensor1,
tensor3,
};
status = ANeuralNetworksModel_identifyInputsAndOutputs(
model_, modelInputOperands.size(), modelInputOperands.data(), 1,
&multiplierOutput);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_identifyInputsAndOutputs failed");
return false;
}
// Finish constructing the model.
// The values of constant and intermediate operands cannot be altered after
// the finish function is called.
status = ANeuralNetworksModel_finish(model_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_finish failed");
return false;
}
// Create the ANeuralNetworksCompilation object for the constructed model.
status = ANeuralNetworksCompilation_create(model_, &compilation_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksCompilation_create failed");
return false;
}
// Set the preference for the compilation, so that the runtime and drivers
// can make better decisions.
// Here we prefer to get the answer quickly, so we choose
// ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER.
status = ANeuralNetworksCompilation_setPreference(
compilation_, ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksCompilation_setPreference failed");
return false;
}
// Finish the compilation.
status = ANeuralNetworksCompilation_finish(compilation_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksCompilation_finish failed");
return false;
}
return true;
}
/**
* Compute with the given input data.
* @param inputValue1 the value used to fill input tensor1
* @param inputValue2 the value used to fill input tensor3
* @param result receives the first element of the computed output tensor
* @return true for success, false otherwise
*/
bool SimpleModel::Compute(float inputValue1, float inputValue2, float* result) {
if (!result) {
return false;
}
// Create an ANeuralNetworksExecution object from the compiled model.
// Note:
// 1. All the input and output data are tied to the ANeuralNetworksExecution
// object.
// 2. Multiple concurrent execution instances could be created from the same
// compiled model.
// This sample only uses one execution of the compiled model.
ANeuralNetworksExecution* execution;
int32_t status = ANeuralNetworksExecution_create(compilation_, &execution);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_create failed");
return false;
}
// Set all the elements of the first input tensor (tensor1) to the same value
// as inputValue1. It's not a realistic example but it shows how to pass a
// small tensor to an execution.
std::fill(inputTensor1_.data(), inputTensor1_.data() + tensorSize_,
inputValue1);
// Tell the execution to associate inputTensor1 to the first of the two model
// inputs. Note that the index "0" here means the first operand of the
// modelInput list {tensor1, tensor3}, which means tensor1.
status = ANeuralNetworksExecution_setInput(
execution, 0, nullptr, inputTensor1_.data(), tensorSize_ * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setInput failed for input1");
return false;
}
// Set the values of the second input operand (tensor3) to be inputValue2.
// In reality, the values in the shared memory region will be manipulated by
// other modules or processes.
void* input2Mapped = mmap(nullptr, tensorSize_ * sizeof(float),
PROT_READ | PROT_WRITE, MAP_SHARED, inputTensor2Fd_, 0);
if (input2Mapped == MAP_FAILED) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, "mmap failed for input2");
return false;
}
float* inputTensor2Ptr = reinterpret_cast<float*>(input2Mapped);
std::fill(inputTensor2Ptr, inputTensor2Ptr + tensorSize_, inputValue2);
// Unmap using the original base pointer and the full mapped length.
munmap(inputTensor2Ptr, tensorSize_ * sizeof(float));
// ANeuralNetworksExecution_setInputFromMemory associates the operand with a
// shared memory region to minimize the number of copies of raw data. Note
// that the index "1" here means the second operand of the modelInput list
// {tensor1, tensor3}, which means tensor3.
status = ANeuralNetworksExecution_setInputFromMemory(
execution, 1, nullptr, memoryInput2_, 0, tensorSize_ * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setInputFromMemory failed for input2");
return false;
}
// Set the output tensor that will be filled by executing the model.
// We use shared memory here to minimize the copies needed for getting the
// output data.
status = ANeuralNetworksExecution_setOutputFromMemory(
execution, 0, nullptr, memoryOutput_, 0, tensorSize_ * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setOutputFromMemory failed for output");
return false;
}
// Start the execution of the model.
// Note that the execution here is asynchronous, and an ANeuralNetworksEvent
// object will be created to monitor the status of the execution.
ANeuralNetworksEvent* event = nullptr;
status = ANeuralNetworksExecution_startCompute(execution, &event);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_startCompute failed");
return false;
}
// Wait until the completion of the execution. This could be done on a
// different thread. By waiting immediately, we effectively make this a
// synchronous call.
status = ANeuralNetworksEvent_wait(event);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksEvent_wait failed");
return false;
}
ANeuralNetworksEvent_free(event);
ANeuralNetworksExecution_free(execution);
// Validate the results.
const float goldenRef = (inputValue1 + 0.5f) * (inputValue2 + 0.5f);
void* outputMapped = mmap(nullptr, tensorSize_ * sizeof(float), PROT_READ,
MAP_SHARED, outputTensorFd_, 0);
if (outputMapped == MAP_FAILED) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, "mmap failed for output");
return false;
}
float* outputTensorPtr = reinterpret_cast<float*>(outputMapped);
for (uint32_t idx = 0; idx < tensorSize_; idx++) {
float delta = outputTensorPtr[idx] - goldenRef;
delta = (delta < 0.0f) ? -delta : delta;
if (delta > FLOAT_EPSILON) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"Output computation Error: output(%f), delta(%f) @ idx(%u)",
outputTensorPtr[idx], delta, idx);
}
}
*result = outputTensorPtr[0];
munmap(outputTensorPtr, tensorSize_ * sizeof(float));
return true;
}
/**
* SimpleModel Destructor.
*
* Release NN API objects and close the file descriptors.
*/
SimpleModel::~SimpleModel() {
ANeuralNetworksCompilation_free(compilation_);
ANeuralNetworksModel_free(model_);
ANeuralNetworksMemory_free(memoryModel_);
ANeuralNetworksMemory_free(memoryInput2_);
ANeuralNetworksMemory_free(memoryOutput_);
close(inputTensor2Fd_);
close(outputTensorFd_);
}
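
For orientation, a host-side caller would drive this class roughly as follows. This is a minimal sketch, not part of the sample: `RunOnce` is a hypothetical helper, the `AAsset*` for `model_data.bin` is assumed to come from the app's asset manager, and the 0.5 constant weights are inferred from the `goldenRef` check in `Compute()` above.

```
#include <cmath>

#include "simple_model.h"

// Hypothetical helper: build, compile, and run the model once.
bool RunOnce(AAsset* asset) {
  SimpleModel model(asset);
  if (!model.CreateCompiledModel()) {
    return false;  // Building or compiling the graph failed.
  }
  float result = 0.0f;
  // With the 0.5-valued constant tensors, every output element is
  // (1.0 + 0.5) * (2.0 + 0.5) == 3.75.
  if (!model.Compute(1.0f, 2.0f, &result)) {
    return false;
  }
  // result holds the first element of the output tensor.
  return std::fabs(result - 3.75f) < 1e-6f;
}
```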

View File

@@ -1,64 +0,0 @@
/**
* Copyright 2017 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef NNAPI_SIMPLE_MODEL_H
#define NNAPI_SIMPLE_MODEL_H
#include <android/NeuralNetworks.h>
#include <android/asset_manager_jni.h>
#include <vector>
#define FLOAT_EPSILON (1e-6)
#define TENSOR_SIZE 200
#define LOG_TAG "NNAPI_BASIC"
/**
* SimpleModel
* Build up the hardcoded graph of
* ADD_1 ---+
*          +--- MUL ---> output result
* ADD_2 ---+
*
* Operands are all 1-D TENSOR_FLOAT32 tensors of dimLength elements,
* with the fused activation function set to NONE.
*
*/
class SimpleModel {
public:
explicit SimpleModel(AAsset* asset);
~SimpleModel();
bool CreateCompiledModel();
bool Compute(float inputValue1, float inputValue2, float* result);
private:
ANeuralNetworksModel* model_ = nullptr;
ANeuralNetworksCompilation* compilation_ = nullptr;
ANeuralNetworksMemory* memoryModel_ = nullptr;
ANeuralNetworksMemory* memoryInput2_ = nullptr;
ANeuralNetworksMemory* memoryOutput_ = nullptr;
uint32_t dimLength_ = 0;
uint32_t tensorSize_ = 0;
std::vector<float> inputTensor1_;
int inputTensor2Fd_ = -1;
int outputTensorFd_ = -1;
};
#endif // NNAPI_SIMPLE_MODEL_H

View File

@@ -1,85 +0,0 @@
/**
* Copyright 2017 The Android Open Source Project
*
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
*
* http://www.apache.org/licenses/LICENSE-2.0
*
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.android.basic
import android.app.Activity
import android.content.res.AssetManager
import android.os.Bundle
import android.widget.Toast
import com.example.android.basic.databinding.ActivityMainBinding
import kotlinx.coroutines.*
/*
MainActivity to take care of UI and user inputs
*/
class MainActivity : Activity() {
private var modelHandle = 0L
/*
3 JNI functions managing NN models, refer to basic/README.md
for model structure
*/
private external fun initModel(assetManager: AssetManager?, assetName: String?): Long
private external fun startCompute(modelHandle: Long, input1: Float, input2: Float): Float
private external fun destroyModel(modelHandle: Long)
private lateinit var binding: ActivityMainBinding
private val activityJob = Job()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = ActivityMainBinding.inflate(layoutInflater)
setContentView(binding.root)
CoroutineScope(Dispatchers.IO + activityJob).launch {
modelHandle = this@MainActivity.initModel(assets, "model_data.bin")
}
binding.computButton.setOnClickListener {
if (modelHandle == 0L) {
Toast.makeText(applicationContext, "Model initializing, please wait",
Toast.LENGTH_SHORT).show()
return@setOnClickListener
}
if (binding.tensorSeed0.text.isNotEmpty() && binding.tensorSeed2.text.isNotEmpty()) {
Toast.makeText(applicationContext, "Computing", Toast.LENGTH_SHORT).show()
val operand0 = binding.tensorSeed0.text.toString().toFloat()
val operand2 = binding.tensorSeed2.text.toString().toFloat()
binding.computeResult.text = startCompute(modelHandle, operand0, operand2).toString()
}
}
}
override fun onDestroy() {
activityJob.cancel()
if (modelHandle != 0L) {
destroyModel(modelHandle)
modelHandle = 0
}
super.onDestroy()
}
companion object {
init {
System.loadLibrary("basic")
}
}
}
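
The three `external` declarations above bind to native functions that follow the JNI naming convention. As a rough sketch (hypothetical and simplified; the sample's actual JNI layer may differ in detail), the bridge could look like this:

```
#include <android/asset_manager_jni.h>
#include <jni.h>

#include "simple_model.h"

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_android_basic_MainActivity_initModel(
    JNIEnv* env, jobject /* this */, jobject asset_manager, jstring asset_name) {
  AAssetManager* mgr = AAssetManager_fromJava(env, asset_manager);
  const char* name = env->GetStringUTFChars(asset_name, nullptr);
  AAsset* asset = AAssetManager_open(mgr, name, AASSET_MODE_BUFFER);
  env->ReleaseStringUTFChars(asset_name, name);
  if (asset == nullptr) return 0;
  auto* model = new SimpleModel(asset);
  if (!model->CreateCompiledModel()) {
    delete model;
    return 0;  // The Kotlin side treats 0 as "not initialized".
  }
  return reinterpret_cast<jlong>(model);
}

extern "C" JNIEXPORT jfloat JNICALL
Java_com_example_android_basic_MainActivity_startCompute(
    JNIEnv* /* env */, jobject /* this */, jlong handle, jfloat v1, jfloat v2) {
  float result = 0.0f;
  reinterpret_cast<SimpleModel*>(handle)->Compute(v1, v2, &result);
  return result;
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_android_basic_MainActivity_destroyModel(
    JNIEnv* /* env */, jobject /* this */, jlong handle) {
  delete reinterpret_cast<SimpleModel*>(handle);
}
```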

View File

@@ -1,34 +0,0 @@
<vector xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:aapt="http://schemas.android.com/aapt"
android:width="108dp"
android:height="108dp"
android:viewportHeight="108"
android:viewportWidth="108">
<path
android:fillType="evenOdd"
android:pathData="M32,64C32,64 38.39,52.99 44.13,50.95C51.37,48.37 70.14,49.57 70.14,49.57L108.26,87.69L108,109.01L75.97,107.97L32,64Z"
android:strokeColor="#00000000"
android:strokeWidth="1">
<aapt:attr name="android:fillColor">
<gradient
android:endX="78.5885"
android:endY="90.9159"
android:startX="48.7653"
android:startY="61.0927"
android:type="linear">
<item
android:color="#44000000"
android:offset="0.0" />
<item
android:color="#00000000"
android:offset="1.0" />
</gradient>
</aapt:attr>
</path>
<path
android:fillColor="#FFFFFF"
android:fillType="nonZero"
android:pathData="M66.94,46.02L66.94,46.02C72.44,50.07 76,56.61 76,64L32,64C32,56.61 35.56,50.11 40.98,46.06L36.18,41.19C35.45,40.45 35.45,39.3 36.18,38.56C36.91,37.81 38.05,37.81 38.78,38.56L44.25,44.05C47.18,42.57 50.48,41.71 54,41.71C57.48,41.71 60.78,42.57 63.68,44.05L69.11,38.56C69.84,37.81 70.98,37.81 71.71,38.56C72.44,39.3 72.44,40.45 71.71,41.19L66.94,46.02ZM62.94,56.92C64.08,56.92 65,56.01 65,54.88C65,53.76 64.08,52.85 62.94,52.85C61.8,52.85 60.88,53.76 60.88,54.88C60.88,56.01 61.8,56.92 62.94,56.92ZM45.06,56.92C46.2,56.92 47.13,56.01 47.13,54.88C47.13,53.76 46.2,52.85 45.06,52.85C43.92,52.85 43,53.76 43,54.88C43,56.01 43.92,56.92 45.06,56.92Z"
android:strokeColor="#00000000"
android:strokeWidth="1" />
</vector>

View File

@@ -1,170 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="108dp"
android:height="108dp"
android:viewportHeight="108"
android:viewportWidth="108">
<path
android:fillColor="#26A69A"
android:pathData="M0,0h108v108h-108z" />
<path
android:fillColor="#00000000"
android:pathData="M9,0L9,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,0L19,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M29,0L29,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M39,0L39,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M49,0L49,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M59,0L59,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M69,0L69,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M79,0L79,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M89,0L89,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M99,0L99,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,9L108,9"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,19L108,19"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,29L108,29"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,39L108,39"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,49L108,49"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,59L108,59"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,69L108,69"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,79L108,79"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,89L108,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,99L108,99"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,29L89,29"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,39L89,39"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,49L89,49"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,59L89,59"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,69L89,69"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,79L89,79"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M29,19L29,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M39,19L39,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M49,19L49,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M59,19L59,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M69,19L69,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M79,19L79,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
</vector>

View File

@@ -1,115 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="com.example.android.basic.MainActivity">
<Button
android:id="@+id/computButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginBottom="52dp"
android:layout_marginTop="8dp"
android:text="@string/compute"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/computeResult"
tools:text="@string/compute" />
<EditText
android:id="@+id/tensorSeed0"
android:layout_width="161dp"
android:layout_height="wrap_content"
android:layout_marginEnd="96dp"
android:layout_marginTop="24dp"
android:ems="10"
android:inputType="numberDecimal"
android:textAlignment="center"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<EditText
android:id="@+id/tensorSeed2"
android:layout_width="161dp"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
android:layout_marginTop="20dp"
android:ems="10"
android:inputType="numberDecimal"
android:textAlignment="center"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/tensorSeed0"
app:layout_constraintHorizontal_bias="1.0"
app:layout_constraintStart_toStartOf="@+id/tensorSeed0"
app:layout_constraintTop_toBottomOf="@+id/tensorSeed0" />
<TextView
android:id="@+id/computeResult"
android:layout_width="161dp"
android:layout_height="32dp"
android:layout_marginEnd="8dp"
android:layout_marginTop="104dp"
android:text="@string/none"
android:textAlignment="center"
android:textAllCaps="false"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/tensorSeed2"
app:layout_constraintHorizontal_bias="0.0"
app:layout_constraintStart_toStartOf="@+id/tensorSeed2"
app:layout_constraintTop_toBottomOf="@+id/tensorSeed2"
tools:text="@string/none" />
<TextView
android:id="@+id/resultLabel"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="8dp"
android:layout_marginTop="104dp"
android:text="@string/result"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/tensorLabel2"
app:layout_constraintHorizontal_bias="1.0"
app:layout_constraintStart_toStartOf="@+id/tensorLabel2"
app:layout_constraintTop_toBottomOf="@+id/tensorLabel2"
tools:text="@string/result" />
<TextView
android:id="@+id/tensorLabel0"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="16dp"
android:layout_marginTop="32dp"
android:layout_marginEnd="8dp"
android:text="@string/label0"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
android:visibility="visible"
app:layout_constraintEnd_toStartOf="@+id/tensorSeed0"
app:layout_constraintHorizontal_bias="0.446"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
tools:text="@string/label0" />
<TextView
android:id="@+id/tensorLabel2"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="8dp"
android:layout_marginTop="36dp"
android:text="@string/label2"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
android:visibility="visible"
app:layout_constraintEnd_toEndOf="@+id/tensorLabel0"
app:layout_constraintHorizontal_bias="1.0"
app:layout_constraintStart_toStartOf="@+id/tensorLabel0"
app:layout_constraintTop_toBottomOf="@+id/tensorLabel0"
tools:text="@string/label2" />
</androidx.constraintlayout.widget.ConstraintLayout>

View File

@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
<background android:drawable="@drawable/ic_launcher_background" />
<foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>

View File

@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
<background android:drawable="@drawable/ic_launcher_background" />
<foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>

Binary file not shown. (Before: 3.0 KiB)

Binary file not shown. (Before: 4.9 KiB)

Binary file not shown. (Before: 2.0 KiB)

Binary file not shown. (Before: 2.8 KiB)

Binary file not shown. (Before: 4.5 KiB)

Binary file not shown. (Before: 6.9 KiB)

Binary file not shown. (Before: 6.3 KiB)

Binary file not shown. (Before: 10 KiB)

Binary file not shown. (Before: 9.0 KiB)

Binary file not shown. (Before: 15 KiB)

View File

@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="colorPrimary">#3F51B5</color>
<color name="colorPrimaryDark">#303F9F</color>
<color name="colorAccent">#FF4081</color>
</resources>

View File

@@ -1,8 +0,0 @@
<resources>
<string name="app_name">NN API Demo: basic</string>
<string name="compute">Compute</string>
<string name="result">Result: </string>
<string name="label0">Augend0: </string>
<string name="label2">Augend2: </string>
<string name="none">None</string>
</resources>

View File

@@ -1,8 +0,0 @@
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="android:Theme.Material.Light.DarkActionBar">
<!-- Customize your theme here. -->
</style>
</resources>

View File

@@ -1 +0,0 @@
/build

View File

@@ -1,42 +0,0 @@
# Android Neural Networks API Sample: Sequence
This Android Neural Networks API (NN API) sample demonstrates basic usage of
the NN API with a sequence model that consists of two operations: one addition
and one multiplication. The graph computes a single step of accumulating a
geometric progression.
```
  sumIn ---+
           +--- ADD ---> sumOut
stateIn ---+
           +--- MUL ---> stateOut
  ratio ---+
```
The ratio is a constant tensor defined in the model. It represents the weights
that would have been learned during a training process. sumIn and stateIn are
input tensors; their values are provided when we execute the model and can
change from execution to execution. To compute the sum of a geometric
progression, the graph is executed multiple times with its inputs and outputs
chained together, as sketched in the diagram and code below.
```
                +----------+   +----------+       +----------+
  initialSum -->| Simple   |-->| Simple   |--> -->| Simple   |--> sumOut
                | Sequence |   | Sequence |  ...  | Sequence |
initialState -->| Model    |-->| Model    |--> -->| Model    |--> stateOut
                +----------+   +----------+       +----------+
```
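
In NNAPI terms, that chaining loop can be sketched roughly as below (condensed from `sequence_model.cpp` later in this diff; memory setup, error handling, and event cleanup are omitted):

```
// One execution per step; each step depends on the previous step's event.
std::vector<ANeuralNetworksEvent*> events(steps, nullptr);
for (uint32_t i = 0; i < steps; ++i) {
  ANeuralNetworksExecution* execution = nullptr;
  ANeuralNetworksExecution_create(compilation, &execution);
  // ... bind sumIn/stateIn inputs and sumOut/stateOut outputs here ...
  const ANeuralNetworksEvent* waitFor = (i == 0) ? nullptr : events[i - 1];
  ANeuralNetworksExecution_startComputeWithDependencies(
      execution, waitFor ? &waitFor : nullptr, waitFor ? 1 : 0,
      0 /* infinite timeout */, &events[i]);
  ANeuralNetworksExecution_free(execution);
}
// The events form a chain, so waiting on the last one waits for all steps.
ANeuralNetworksEvent_wait(events.back());
```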
## Additional Requirements
- Android 11 SDK to compile
- A device running Android 11
Note: This sample uses its own wrapper to access the new NNAPI features in
Android 11 due to a known issue. This will be updated once the issue is fixed
in the next R SDK release.
## Screenshots
<img src="screenshot.png" width="480">

View File

@@ -1,28 +0,0 @@
plugins {
id "ndksamples.android.application"
}
android {
namespace 'com.example.android.sequence'
defaultConfig {
applicationId "com.example.android.sequence"
minSdkVersion 30
versionCode 1
versionName "1.0"
}
externalNativeBuild {
cmake {
path "src/main/cpp/CMakeLists.txt"
}
}
androidResources {
noCompress 'bin'
}
}
dependencies {
implementation libs.androidx.constraintlayout
}

View File

@@ -1,21 +0,0 @@
# Add project specific ProGuard rules here.
# You can control the set of applied configuration files using the
# proguardFiles setting in build.gradle.
#
# For more details, see
# http://developer.android.com/guide/developing/tools/proguard.html
# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
# public *;
#}
# Uncomment this to preserve the line number information for
# debugging stack traces.
#-keepattributes SourceFile,LineNumberTable
# If you keep the line number information, uncomment this to
# hide the original source file name.
#-renamesourcefileattribute SourceFile

Binary file not shown. (Before: 77 KiB)

View File

@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity"
android:exported="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>

View File

@@ -1,15 +0,0 @@
cmake_minimum_required(VERSION 3.22.1)
project(NnSamplesSequence LANGUAGES CXX)
add_library(sequence
SHARED
sequence.cpp
sequence_model.cpp
)
target_link_libraries(sequence
# Link with libneuralnetworks.so for NN API
neuralnetworks
android
log
)

View File

@@ -1,62 +0,0 @@
/**
* Copyright 2020 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <android/sharedmem.h>
#include <fcntl.h>
#include <jni.h>
#include <sys/mman.h>
#include <iomanip>
#include <sstream>
#include <string>
#include "sequence_model.h"
extern "C" JNIEXPORT jlong JNICALL
Java_com_example_android_sequence_MainActivity_initModel(JNIEnv* env,
jobject /* this */,
jfloat ratio) {
auto model = SimpleSequenceModel::Create(ratio);
if (model == nullptr) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"Failed to create the model.");
return 0;
}
return (jlong)(uintptr_t)model.release();
}
extern "C" JNIEXPORT jfloat JNICALL
Java_com_example_android_sequence_MainActivity_compute(JNIEnv* env,
jobject /* this */,
jfloat initialValue,
jint steps,
jlong _nnModel) {
SimpleSequenceModel* nn_model = (SimpleSequenceModel*)_nnModel;
float result = 0.0f;
nn_model->Compute(initialValue, static_cast<uint32_t>(steps), &result);
return result;
}
extern "C" JNIEXPORT void JNICALL
Java_com_example_android_sequence_MainActivity_destroyModel(JNIEnv* env,
jobject /* this */,
jlong _nnModel) {
SimpleSequenceModel* nn_model = (SimpleSequenceModel*)_nnModel;
delete (nn_model);
}

View File

@@ -1,724 +0,0 @@
/**
* Copyright 2020 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "sequence_model.h"
#include <android/log.h>
#include <android/sharedmem.h>
#include <sys/mman.h>
#include <unistd.h>
#include <algorithm>
#include <string>
#include <utility>
#include <vector>
/**
* A helper method to allocate an ASharedMemory region and create an
* ANeuralNetworksMemory object.
*/
static std::pair<int, ANeuralNetworksMemory*> CreateASharedMemory(
const char* name, uint32_t size, int prot) {
int fd = ASharedMemory_create(name, size * sizeof(float));
// Create an ANeuralNetworksMemory object from the corresponding ASharedMemory
// objects.
ANeuralNetworksMemory* memory = nullptr;
int32_t status = ANeuralNetworksMemory_createFromFd(size * sizeof(float),
prot, fd, 0, &memory);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromFd failed for %s",
name);
close(fd);
return {-1, nullptr};
}
return {fd, memory};
}
/**
* A helper method to fill the ASharedMemory region with the given value.
*/
static void fillMemory(int fd, uint32_t size, float value) {
// Set the values of the memory.
// In reality, the values in the shared memory region will be manipulated by
// other modules or processes.
float* data =
reinterpret_cast<float*>(mmap(nullptr, size * sizeof(float),
PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
std::fill(data, data + size, value);
munmap(data, size * sizeof(float));
}
/**
* Factory method of SimpleSequenceModel.
*
* Create and initialize the model, compilation, and memories associated
* with the computation graph.
*
* @return A pointer to the created model on success, nullptr otherwise
*/
std::unique_ptr<SimpleSequenceModel> SimpleSequenceModel::Create(float ratio) {
auto model = std::make_unique<SimpleSequenceModel>(ratio);
if (model->CreateSharedMemories() && model->CreateModel() &&
model->CreateCompilation() && model->CreateOpaqueMemories()) {
return model;
}
return nullptr;
}
/**
* SimpleSequenceModel Constructor.
*/
SimpleSequenceModel::SimpleSequenceModel(float ratio) : ratio_(ratio) {}
/**
* Initialize the shared memory objects. In reality, the values in the shared
* memory region will be manipulated by other modules or processes.
*
* @return true for success, false otherwise
*/
bool SimpleSequenceModel::CreateSharedMemories() {
// Create ASharedMemory to hold the data for initial state, ratio, and sums.
std::tie(initialStateFd_, memoryInitialState_) =
CreateASharedMemory("initialState", tensorSize_, PROT_READ);
std::tie(ratioFd_, memoryRatio_) =
CreateASharedMemory("ratio", tensorSize_, PROT_READ);
std::tie(sumInFd_, memorySumIn_) =
CreateASharedMemory("sumIn", tensorSize_, PROT_READ | PROT_WRITE);
std::tie(sumOutFd_, memorySumOut_) =
CreateASharedMemory("sumOut", tensorSize_, PROT_READ | PROT_WRITE);
// Initialize the ratio tensor.
fillMemory(ratioFd_, tensorSize_, ratio_);
return true;
}
/**
* Create a graph that consists of two operations: one addition and one
* multiplication. This graph is used for computing a single step of
* accumulating a geometric progression.
*
*   sumIn ---+
*            +--- ADD ---> sumOut
* stateIn ---+
*            +--- MUL ---> stateOut
*   ratio ---+
*
* The ratio is a constant tensor, defined in the model. It represents the
* weights that would have been learned during a training process.
*
* The sumIn and stateIn are input tensors. Their values will be provided when
* we execute the model. These values can change from execution to execution.
*
* To compute the sum of a geometric progression, the graph will be executed
* multiple times with inputs and outputs chained together.
*
*                 +----------+   +----------+       +----------+
*   initialSum -->| Simple   |-->| Simple   |--> -->| Simple   |--> sumOut
*                 | Sequence |   | Sequence |  ...  | Sequence |
* initialState -->| Model    |-->| Model    |--> -->| Model    |--> stateOut
*                 +----------+   +----------+       +----------+
*
* @return true for success, false otherwise
*/
bool SimpleSequenceModel::CreateModel() {
int32_t status;
// Create the ANeuralNetworksModel handle.
status = ANeuralNetworksModel_create(&model_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_create failed");
return false;
}
uint32_t dimensions[] = {dimLength_, dimLength_};
ANeuralNetworksOperandType float32TensorType{
.type = ANEURALNETWORKS_TENSOR_FLOAT32,
.dimensionCount = sizeof(dimensions) / sizeof(dimensions[0]),
.dimensions = dimensions,
.scale = 0.0f,
.zeroPoint = 0,
};
ANeuralNetworksOperandType scalarInt32Type{
.type = ANEURALNETWORKS_INT32,
.dimensionCount = 0,
.dimensions = nullptr,
.scale = 0.0f,
.zeroPoint = 0,
};
/**
* Add operands and operations to construct the model.
*
* Operands are implicitly identified by the order in which they are added to
* the model, starting from 0.
*
* These indexes are not returned by the model_addOperand call. The
* application must manage these values. Here, we use opIdx to do the
* bookkeeping.
*/
uint32_t opIdx = 0;
// We first add the operand for the NONE activation function, and set its
// value to ANEURALNETWORKS_FUSED_NONE.
// This constant scalar operand will be used for both ADD and MUL.
status = ANeuralNetworksModel_addOperand(model_, &scalarInt32Type);
uint32_t fusedActivationFuncNone = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)",
fusedActivationFuncNone);
return false;
}
FuseCode fusedActivationCodeValue = ANEURALNETWORKS_FUSED_NONE;
status = ANeuralNetworksModel_setOperandValue(
model_, fusedActivationFuncNone, &fusedActivationCodeValue,
sizeof(fusedActivationCodeValue));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_setOperandValue failed for operand (%d)",
fusedActivationFuncNone);
return false;
}
// sumIn is one of the user provided input tensors to the trained model.
// Its value is determined pre-execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t sumIn = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", sumIn);
return false;
}
// stateIn is one of the user provided input tensors to the trained model.
// Its value is determined pre-execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t stateIn = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", stateIn);
return false;
}
// ratio is a constant tensor that was established during training.
// We read these values from the corresponding ANeuralNetworksMemory object.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t ratio = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", ratio);
return false;
}
status = ANeuralNetworksModel_setOperandValueFromMemory(
model_, ratio, memoryRatio_, 0, tensorSize_ * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_setOperandValueFromMemory failed "
"for operand (%d)",
ratio);
return false;
}
// sumOut is the output of the ADD operation.
// Its value will be computed during execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t sumOut = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", sumOut);
return false;
}
// stateOut is the output of the MUL operation.
// Its value will be computed during execution.
status = ANeuralNetworksModel_addOperand(model_, &float32TensorType);
uint32_t stateOut = opIdx++;
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperand failed for operand (%d)", stateOut);
return false;
}
// Add the ADD operation.
std::vector<uint32_t> addInputOperands = {
sumIn,
stateIn,
fusedActivationFuncNone,
};
status = ANeuralNetworksModel_addOperation(
model_, ANEURALNETWORKS_ADD, addInputOperands.size(),
addInputOperands.data(), 1, &sumOut);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperation failed for ADD");
return false;
}
// Add the MUL operation.
std::vector<uint32_t> mulInputOperands = {
stateIn,
ratio,
fusedActivationFuncNone,
};
status = ANeuralNetworksModel_addOperation(
model_, ANEURALNETWORKS_MUL, mulInputOperands.size(),
mulInputOperands.data(), 1, &stateOut);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_addOperation failed for MUL");
return false;
}
// Identify the input and output tensors to the model.
// Inputs: {sumIn, stateIn}
// Outputs: {sumOut, stateOut}
std::vector<uint32_t> modelInputs = {
sumIn,
stateIn,
};
std::vector<uint32_t> modelOutputs = {
sumOut,
stateOut,
};
status = ANeuralNetworksModel_identifyInputsAndOutputs(
model_, modelInputs.size(), modelInputs.data(), modelOutputs.size(),
modelOutputs.data());
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_identifyInputsAndOutputs failed");
return false;
}
// Finish constructing the model.
// The values of constant operands cannot be altered after
// the finish function is called.
status = ANeuralNetworksModel_finish(model_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksModel_finish failed");
return false;
}
return true;
}
/**
* Compile the model.
*
* @return true for success, false otherwise
*/
bool SimpleSequenceModel::CreateCompilation() {
int32_t status;
// Create the ANeuralNetworksCompilation object for the constructed model.
status = ANeuralNetworksCompilation_create(model_, &compilation_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksCompilation_create failed");
return false;
}
// Set the preference for the compilation_, so that the runtime and drivers
// can make better decisions.
// Here we prefer to get the answer quickly, so we choose
// ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER.
status = ANeuralNetworksCompilation_setPreference(
compilation_, ANEURALNETWORKS_PREFER_FAST_SINGLE_ANSWER);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksCompilation_setPreference failed");
return false;
}
// Finish the compilation.
status = ANeuralNetworksCompilation_finish(compilation_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksCompilation_finish failed");
return false;
}
return true;
}
/**
* Create and initialize the opaque memory objects.
*
* Opaque memories are suitable for memories that are internal to NNAPI,
* e.g. state tensors or intermediate results. Using opaque memories may
* reduce the data copying and transformation overhead.
*
* In this example, only the initial sum, the initial state, and the final sum
* are interesting to us. We do not need to know the intermediate results. So,
* we create two pairs of opaque memories for intermediate sums and states.
*
* @return true for success, false otherwise
*/
bool SimpleSequenceModel::CreateOpaqueMemories() {
int32_t status;
// Create opaque memories for sum tensors.
// We start from creating a memory descriptor and describing all of the
// intended memory usages.
ANeuralNetworksMemoryDesc* sumDesc = nullptr;
status = ANeuralNetworksMemoryDesc_create(&sumDesc);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_create failed");
return false;
}
// Specify that the state memory will be used as the first input (sumIn)
// of the compilation. Note that the index "0" here means the first operand
// of the modelInputs list {sumIn, stateIn}, which means sumIn.
status =
ANeuralNetworksMemoryDesc_addInputRole(sumDesc, compilation_, 0, 1.0f);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_addInputRole failed");
ANeuralNetworksMemoryDesc_free(sumDesc);
return false;
}
// Specify that the state memory will also be used as the first output
// (sumOut) of the compilation. Note that the index "0" here means the
// first operand of the modelOutputs list {sumOut, stateOut}, which means
// sumOut.
status =
ANeuralNetworksMemoryDesc_addOutputRole(sumDesc, compilation_, 0, 1.0f);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_addOutputRole failed");
ANeuralNetworksMemoryDesc_free(sumDesc);
return false;
}
// Finish the memory descriptor.
status = ANeuralNetworksMemoryDesc_finish(sumDesc);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_finish failed");
ANeuralNetworksMemoryDesc_free(sumDesc);
return false;
}
// Create two opaque memories from the finished descriptor: one for input
// and one for output. We will swap the two memories after each single
// execution step.
status = ANeuralNetworksMemory_createFromDesc(sumDesc, &memoryOpaqueSumIn_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromDesc failed for sum memory #1");
ANeuralNetworksMemoryDesc_free(sumDesc);
return false;
}
status = ANeuralNetworksMemory_createFromDesc(sumDesc, &memoryOpaqueSumOut_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromDesc failed for sum memory #2");
ANeuralNetworksMemoryDesc_free(sumDesc);
return false;
}
// It is safe to free the memory descriptor once all of the memories have
// been created.
ANeuralNetworksMemoryDesc_free(sumDesc);
// Create opaque memories for state tensors.
// We start from creating a memory descriptor and describing all of the
// intended memory usages.
ANeuralNetworksMemoryDesc* stateDesc = nullptr;
status = ANeuralNetworksMemoryDesc_create(&stateDesc);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_create failed");
return false;
}
// Specify that the state memory will be used as the second input (stateIn)
// of the compilation. Note that the index "1" here means the second operand
// of the modelInputs list {sumIn, stateIn}, which means stateIn.
status =
ANeuralNetworksMemoryDesc_addInputRole(stateDesc, compilation_, 1, 1.0f);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_addInputRole failed");
ANeuralNetworksMemoryDesc_free(stateDesc);
return false;
}
// Specify that the state memory will also be used as the second output
// (stateOut) of the compilation. Note that the index "1" here means the
// second operand of the modelOutputs list {sumOut, stateOut}, which means
// stateOut.
status =
ANeuralNetworksMemoryDesc_addOutputRole(stateDesc, compilation_, 1, 1.0f);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_addOutputRole failed");
ANeuralNetworksMemoryDesc_free(stateDesc);
return false;
}
// Finish the memory descriptor.
status = ANeuralNetworksMemoryDesc_finish(stateDesc);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemoryDesc_finish failed");
ANeuralNetworksMemoryDesc_free(stateDesc);
return false;
}
// Create two opaque memories from the finished descriptor: one for input
// and one for output. We will swap the two memories after each single
// execution step.
status =
ANeuralNetworksMemory_createFromDesc(stateDesc, &memoryOpaqueStateIn_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromDesc failed for state memory #1");
ANeuralNetworksMemoryDesc_free(stateDesc);
return false;
}
status =
ANeuralNetworksMemory_createFromDesc(stateDesc, &memoryOpaqueStateOut_);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksMemory_createFromDesc failed for state memory #2");
ANeuralNetworksMemoryDesc_free(stateDesc);
return false;
}
// It is safe to free the memory descriptor once all of the memories have
// been created.
ANeuralNetworksMemoryDesc_free(stateDesc);
return true;
}
/**
* Dispatch a single computation step of accumulating the geometric progression.
*/
static bool DispatchSingleStep(
ANeuralNetworksCompilation* compilation, ANeuralNetworksMemory* sumIn,
uint32_t sumInLength, ANeuralNetworksMemory* stateIn,
uint32_t stateInLength, ANeuralNetworksMemory* sumOut,
uint32_t sumOutLength, ANeuralNetworksMemory* stateOut,
uint32_t stateOutLength, const ANeuralNetworksEvent* waitFor,
ANeuralNetworksEvent** event) {
// Create an ANeuralNetworksExecution object from the compiled model.
ANeuralNetworksExecution* execution;
int32_t status = ANeuralNetworksExecution_create(compilation, &execution);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_create failed");
return false;
}
// Set the memory for the sumIn tensor.
// Note that the index "0" here means the first operand of the modelInputs
// list {sumIn, stateIn}, which means sumIn.
status = ANeuralNetworksExecution_setInputFromMemory(
execution, 0, nullptr, sumIn, 0, sumInLength * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setInputFromMemory failed for sumIn");
return false;
}
// Set the memory for the stateIn tensor.
// Note that the index "1" here means the second operand of the modelInputs
// list {sumIn, stateIn}, which means stateIn.
status = ANeuralNetworksExecution_setInputFromMemory(
execution, 1, nullptr, stateIn, 0, stateInLength * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setInputFromMemory failed for stateIn");
return false;
}
// Set the sumOut tensor that will be filled by executing the model.
status = ANeuralNetworksExecution_setOutputFromMemory(
execution, 0, nullptr, sumOut, 0, sumOutLength * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setOutputFromMemory failed for sumOut");
return false;
}
// Set the stateOut tensor that will be filled by executing the model.
status = ANeuralNetworksExecution_setOutputFromMemory(
execution, 1, nullptr, stateOut, 0, stateOutLength * sizeof(float));
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_setOutputFromMemory failed for stateOut");
return false;
}
// Dispatch the execution of the model.
// Note that the execution here is asynchronous with dependencies.
const ANeuralNetworksEvent* const* dependencies = nullptr;
uint32_t numDependencies = 0;
if (waitFor != nullptr) {
dependencies = &waitFor;
numDependencies = 1;
}
status = ANeuralNetworksExecution_startComputeWithDependencies(
execution, dependencies, numDependencies,
0, // infinite timeout duration
event);
if (status != ANEURALNETWORKS_NO_ERROR) {
__android_log_print(
ANDROID_LOG_ERROR, LOG_TAG,
"ANeuralNetworksExecution_startComputeWithDependencies failed");
return false;
}
ANeuralNetworksExecution_free(execution);
return true;
}
/**
* Compute the sum of a geometric progression.
*
* @param initialValue the initial value of the geometric progression
* @param steps the number of terms to accumulate
* @param result receives the computed sum
* @return true for success, false otherwise
*/
bool SimpleSequenceModel::Compute(float initialValue, uint32_t steps,
float* result) {
if (!result) {
return false;
}
if (steps == 0) {
*result = 0.0f;
return true;
}
// Setup initial values.
// In reality, the values in the shared memory region will be manipulated by
// other modules or processes.
fillMemory(sumInFd_, tensorSize_, 0);
fillMemory(initialStateFd_, tensorSize_, initialValue);
// The event objects for all computation steps.
std::vector<ANeuralNetworksEvent*> events(steps, nullptr);
for (uint32_t i = 0; i < steps; i++) {
// We will only use ASharedMemory for boundary step executions, and use
// opaque memories for intermediate results to minimize the data copying.
// Note that when setting an opaque memory as the input or output of an
// execution, the offset and length must be set to 0 to indicate the
// entire memory region is used.
ANeuralNetworksMemory* sumInMemory;
ANeuralNetworksMemory* sumOutMemory;
ANeuralNetworksMemory* stateInMemory;
ANeuralNetworksMemory* stateOutMemory;
uint32_t sumInLength, sumOutLength, stateInLength, stateOutLength;
if (i == 0) {
sumInMemory = memorySumIn_;
sumInLength = tensorSize_;
stateInMemory = memoryInitialState_;
stateInLength = tensorSize_;
} else {
sumInMemory = memoryOpaqueSumIn_;
sumInLength = 0;
stateInMemory = memoryOpaqueStateIn_;
stateInLength = 0;
}
if (i == steps - 1) {
sumOutMemory = memorySumOut_;
sumOutLength = tensorSize_;
} else {
sumOutMemory = memoryOpaqueSumOut_;
sumOutLength = 0;
}
stateOutMemory = memoryOpaqueStateOut_;
stateOutLength = 0;
// Dispatch a single computation step with a dependency on the previous
// step, if any. The actual computation will start once its dependency has
// finished.
const ANeuralNetworksEvent* waitFor = i == 0 ? nullptr : events[i - 1];
if (!DispatchSingleStep(compilation_, sumInMemory, sumInLength,
stateInMemory, stateInLength, sumOutMemory,
sumOutLength, stateOutMemory, stateOutLength,
waitFor, &events[i])) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG,
"DispatchSingleStep failed for step %d", i);
return false;
}
// Swap the memory handles: the outputs from the current step execution
// will be fed in as the inputs of the next step execution.
std::swap(memoryOpaqueSumIn_, memoryOpaqueSumOut_);
std::swap(memoryOpaqueStateIn_, memoryOpaqueStateOut_);
}
// Since the events are chained, we only need to wait for the last one.
ANeuralNetworksEvent_wait(events.back());
// Get the results.
float* outputTensorPtr =
reinterpret_cast<float*>(mmap(nullptr, tensorSize_ * sizeof(float),
PROT_READ, MAP_SHARED, sumOutFd_, 0));
*result = outputTensorPtr[0];
munmap(outputTensorPtr, tensorSize_ * sizeof(float));
// Cleanup event objects.
for (auto* event : events) {
ANeuralNetworksEvent_free(event);
}
return true;
}
/**
* SimpleSequenceModel Destructor.
*
* Release NN API objects and close the file descriptors.
*/
SimpleSequenceModel::~SimpleSequenceModel() {
ANeuralNetworksCompilation_free(compilation_);
ANeuralNetworksModel_free(model_);
ANeuralNetworksMemory_free(memorySumIn_);
ANeuralNetworksMemory_free(memorySumOut_);
ANeuralNetworksMemory_free(memoryInitialState_);
ANeuralNetworksMemory_free(memoryRatio_);
close(initialStateFd_);
close(sumInFd_);
close(sumOutFd_);
close(ratioFd_);
ANeuralNetworksMemory_free(memoryOpaqueStateIn_);
ANeuralNetworksMemory_free(memoryOpaqueStateOut_);
ANeuralNetworksMemory_free(memoryOpaqueSumIn_);
ANeuralNetworksMemory_free(memoryOpaqueSumOut_);
}


@@ -1,87 +0,0 @@
/**
* Copyright 2020 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef NNAPI_SIMPLE_MODEL_H
#define NNAPI_SIMPLE_MODEL_H
#include <android/NeuralNetworks.h>

#include <cstdint>
#include <memory>
/**
* SimpleSequenceModel
* Build up the hardcoded graph of
*
* sumIn ---+
* +--- ADD ---> sumOut
* stateIn ---+
* +--- MUL ---> stateOut
* ratio ---+
*
* Operands are all 2-D TENSOR_FLOAT32 tensors of:
*      dimLength x dimLength
* with no fused activation operation.
*
* This graph computes a single step of accumulating a finite
* geometric progression.
*
*/
class SimpleSequenceModel {
public:
static std::unique_ptr<SimpleSequenceModel> Create(float ratio);
// Prefer using SimpleSequenceModel::Create.
explicit SimpleSequenceModel(float ratio);
~SimpleSequenceModel();
bool Compute(float initialValue, uint32_t steps, float* result);
private:
bool CreateSharedMemories();
bool CreateModel();
bool CreateCompilation();
bool CreateOpaqueMemories();
ANeuralNetworksModel* model_ = nullptr;
ANeuralNetworksCompilation* compilation_ = nullptr;
static constexpr uint32_t dimLength_ = 200;
static constexpr uint32_t tensorSize_ = dimLength_ * dimLength_;
const float ratio_;
// ASharedMemories. In reality, the values in the shared memory region will
// be manipulated by other modules or processes.
int initialStateFd_ = -1;
int ratioFd_ = -1;
int sumInFd_ = -1;
int sumOutFd_ = -1;
ANeuralNetworksMemory* memoryInitialState_ = nullptr;
ANeuralNetworksMemory* memoryRatio_ = nullptr;
ANeuralNetworksMemory* memorySumIn_ = nullptr;
ANeuralNetworksMemory* memorySumOut_ = nullptr;
// Opaque memories.
ANeuralNetworksMemory* memoryOpaqueStateIn_ = nullptr;
ANeuralNetworksMemory* memoryOpaqueStateOut_ = nullptr;
ANeuralNetworksMemory* memoryOpaqueSumIn_ = nullptr;
ANeuralNetworksMemory* memoryOpaqueSumOut_ = nullptr;
};
#ifndef LOG_TAG
#define LOG_TAG "NNAPI_SEQUENCE"
#endif
#endif // NNAPI_SIMPLE_MODEL_H
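
As a cross-check on the graph above: each step computes sumOut = sumIn + stateIn and stateOut = stateIn * ratio, so iterating from sum = 0 and state = initialValue accumulates the finite geometric series. A plain-C++ reference (a hypothetical helper, not part of the sample) that is handy for validating the NNAPI output:

#include <cstdint>

// Scalar reference for the per-element recurrence implemented by the graph.
// Every element of the dimLength x dimLength tensors evolves identically, so
// one float models the whole tensor.
float ReferenceGeometricSum(float initialValue, float ratio, uint32_t steps) {
  float sum = 0.0f;
  float state = initialValue;
  for (uint32_t i = 0; i < steps; ++i) {
    sum += state;    // ADD:  sumOut   = sumIn + stateIn
    state *= ratio;  // MUL:  stateOut = stateIn * ratio
  }
  // Equals initialValue * (1 - ratio^steps) / (1 - ratio) when ratio != 1.
  return sum;
}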


@@ -1,139 +0,0 @@
/**
* Copyright 2020 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.android.sequence;
import android.app.Activity;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;
import android.widget.Toast;
public class MainActivity extends Activity {
// Used to load the 'sequence' library on application startup.
static { System.loadLibrary("sequence"); }
private static final String LOG_TAG = "NNAPI_SEQUENCE";
private long modelHandle = 0;
public native long initModel(float ratio);
public native float compute(float initialValue, int steps, long modelHandle);
public native void destroyModel(long modelHandle);
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Button resetButton = findViewById(R.id.reset_button);
resetButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
EditText ratioInput = findViewById(R.id.ratio_input);
String ratioStr = ratioInput.getText().toString();
if (ratioStr.isEmpty()) {
Toast.makeText(getApplicationContext(), "Invalid ratio!", Toast.LENGTH_SHORT)
.show();
return;
}
if (modelHandle != 0) {
destroyModel(modelHandle);
modelHandle = 0;
}
TextView ratioText = findViewById(R.id.ratio_text);
TextView resultText = findViewById(R.id.result_text);
ratioText.setText(ratioStr);
resultText.setText(R.string.none);
new InitModelTask().execute(Float.valueOf(ratioStr));
}
});
Button computeButton = findViewById(R.id.compute_button);
computeButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (modelHandle != 0) {
EditText initialValueInput = findViewById(R.id.initial_value_input);
EditText stepsInput = findViewById(R.id.steps_input);
String initialValueStr = initialValueInput.getText().toString();
String stepsStr = stepsInput.getText().toString();
if (initialValueStr.isEmpty() || stepsStr.isEmpty()) {
Toast.makeText(getApplicationContext(), "Invalid initial value or steps!",
Toast.LENGTH_SHORT)
.show();
return;
}
new ComputeTask().execute(initialValueStr, stepsStr);
} else {
Toast.makeText(getApplicationContext(), "Model has not been initialized!",
Toast.LENGTH_SHORT)
.show();
}
}
});
}
@Override
protected void onDestroy() {
if (modelHandle != 0) {
destroyModel(modelHandle);
modelHandle = 0;
}
super.onDestroy();
}
private class InitModelTask extends AsyncTask<Float, Void, Long> {
@Override
protected Long doInBackground(Float... inputs) {
if (inputs.length != 1) {
Log.e(LOG_TAG, "Incorrect number of input values");
return 0L;
}
// Prepare the model in a separate thread.
return initModel(inputs[0]);
}
@Override
protected void onPostExecute(Long result) {
modelHandle = result;
}
}
private class ComputeTask extends AsyncTask<String, Void, Float> {
@Override
protected Float doInBackground(String... inputs) {
if (inputs.length != 2) {
Log.e(LOG_TAG, "Incorrect number of input values");
return 0.0f;
}
// Reusing the same prepared model with different inputs.
return compute(Float.valueOf(inputs[0]), Integer.valueOf(inputs[1]), modelHandle);
}
@Override
protected void onPostExecute(Float result) {
TextView tv = findViewById(R.id.result_text);
tv.setText(String.valueOf(result));
}
}
}


@@ -1,34 +0,0 @@
<vector xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:aapt="http://schemas.android.com/aapt"
android:width="108dp"
android:height="108dp"
android:viewportHeight="108"
android:viewportWidth="108">
<path
android:fillType="evenOdd"
android:pathData="M32,64C32,64 38.39,52.99 44.13,50.95C51.37,48.37 70.14,49.57 70.14,49.57L108.26,87.69L108,109.01L75.97,107.97L32,64Z"
android:strokeColor="#00000000"
android:strokeWidth="1">
<aapt:attr name="android:fillColor">
<gradient
android:endX="78.5885"
android:endY="90.9159"
android:startX="48.7653"
android:startY="61.0927"
android:type="linear">
<item
android:color="#44000000"
android:offset="0.0" />
<item
android:color="#00000000"
android:offset="1.0" />
</gradient>
</aapt:attr>
</path>
<path
android:fillColor="#FFFFFF"
android:fillType="nonZero"
android:pathData="M66.94,46.02L66.94,46.02C72.44,50.07 76,56.61 76,64L32,64C32,56.61 35.56,50.11 40.98,46.06L36.18,41.19C35.45,40.45 35.45,39.3 36.18,38.56C36.91,37.81 38.05,37.81 38.78,38.56L44.25,44.05C47.18,42.57 50.48,41.71 54,41.71C57.48,41.71 60.78,42.57 63.68,44.05L69.11,38.56C69.84,37.81 70.98,37.81 71.71,38.56C72.44,39.3 72.44,40.45 71.71,41.19L66.94,46.02ZM62.94,56.92C64.08,56.92 65,56.01 65,54.88C65,53.76 64.08,52.85 62.94,52.85C61.8,52.85 60.88,53.76 60.88,54.88C60.88,56.01 61.8,56.92 62.94,56.92ZM45.06,56.92C46.2,56.92 47.13,56.01 47.13,54.88C47.13,53.76 46.2,52.85 45.06,52.85C43.92,52.85 43,53.76 43,54.88C43,56.01 43.92,56.92 45.06,56.92Z"
android:strokeColor="#00000000"
android:strokeWidth="1" />
</vector>


@@ -1,170 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="108dp"
android:height="108dp"
android:viewportHeight="108"
android:viewportWidth="108">
<path
android:fillColor="#26A69A"
android:pathData="M0,0h108v108h-108z" />
<path
android:fillColor="#00000000"
android:pathData="M9,0L9,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,0L19,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M29,0L29,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M39,0L39,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M49,0L49,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M59,0L59,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M69,0L69,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M79,0L79,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M89,0L89,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M99,0L99,108"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,9L108,9"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,19L108,19"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,29L108,29"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,39L108,39"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,49L108,49"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,59L108,59"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,69L108,69"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,79L108,79"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,89L108,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M0,99L108,99"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,29L89,29"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,39L89,39"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,49L89,49"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,59L89,59"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,69L89,69"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M19,79L89,79"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M29,19L29,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M39,19L39,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M49,19L49,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M59,19L59,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M69,19L69,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
<path
android:fillColor="#00000000"
android:pathData="M79,19L79,89"
android:strokeColor="#33FFFFFF"
android:strokeWidth="0.8" />
</vector>


@@ -1,179 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="com.example.android.sequence.MainActivity">
<Button
android:id="@+id/compute_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="8dp"
android:layout_marginBottom="52dp"
android:text="@string/compute"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/result_text"
tools:text="@string/compute" />
<Button
android:id="@+id/reset_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginBottom="52dp"
android:text="@string/reset"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.464"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintVertical_bias="0.174"
tools:text="@string/reset" />
<EditText
android:id="@+id/initial_value_input"
android:layout_width="161dp"
android:layout_height="wrap_content"
android:layout_marginTop="264dp"
android:layout_marginEnd="92dp"
android:ems="10"
android:inputType="numberDecimal"
android:textAlignment="center"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<EditText
android:id="@+id/steps_input"
android:layout_width="161dp"
android:layout_height="wrap_content"
android:layout_marginTop="316dp"
android:layout_marginEnd="88dp"
android:ems="10"
android:inputType="number"
android:textAlignment="center"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/steps_label"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="8dp"
android:layout_marginTop="248dp"
android:text="@string/steps"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
app:layout_constraintHorizontal_bias="1.0"
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
tools:text="@string/steps" />
<TextView
android:id="@+id/initial_value_label"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="8dp"
android:layout_marginTop="196dp"
android:text="@string/initial_value"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
app:layout_constraintHorizontal_bias="0.966"
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
tools:text="@string/initial_value" />
<EditText
android:id="@+id/ratio_input"
android:layout_width="161dp"
android:layout_height="wrap_content"
android:layout_marginStart="8dp"
android:layout_marginTop="40dp"
android:ems="10"
android:inputType="numberDecimal"
android:textAlignment="center"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/initial_value_input"
app:layout_constraintStart_toStartOf="@+id/initial_value_input"
app:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/ratio_input_label"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="84dp"
android:layout_marginTop="52dp"
android:text="@string/ratio"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
android:visibility="visible"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
tools:text="@string/ratio" />
<TextView
android:id="@+id/ratio_text"
android:layout_width="161dp"
android:layout_height="32dp"
android:layout_marginStart="18dp"
android:layout_marginTop="136dp"
android:layout_marginEnd="8dp"
android:text="@string/none"
android:textAlignment="center"
android:textAllCaps="false"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/ratio_input"
app:layout_constraintHorizontal_bias="0.652"
app:layout_constraintStart_toEndOf="@+id/result_label"
app:layout_constraintStart_toStartOf="@+id/ratio_input"
app:layout_constraintTop_toBottomOf="@+id/ratio_input"
tools:text="@string/none" />
<TextView
android:id="@+id/ratio_text_label"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="8dp"
android:layout_marginTop="140dp"
android:text="@string/ratio"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
app:layout_constraintHorizontal_bias="1.0"
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
tools:text="@string/ratio" />
<TextView
android:id="@+id/result_text"
android:layout_width="161dp"
android:layout_height="32dp"
android:layout_marginStart="18dp"
android:layout_marginTop="292dp"
android:layout_marginEnd="8dp"
android:text="@string/none"
android:textAlignment="center"
android:textAllCaps="false"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/ratio_input"
app:layout_constraintHorizontal_bias="0.957"
app:layout_constraintStart_toEndOf="@+id/result_label"
app:layout_constraintStart_toStartOf="@+id/ratio_input"
app:layout_constraintTop_toBottomOf="@+id/ratio_input"
tools:text="@string/none" />
<TextView
android:id="@+id/result_label"
android:layout_width="wrap_content"
android:layout_height="32dp"
android:layout_marginStart="8dp"
android:layout_marginTop="296dp"
android:text="@string/result"
android:textAppearance="@android:style/TextAppearance"
android:textSize="18sp"
app:layout_constraintEnd_toEndOf="@+id/ratio_input_label"
app:layout_constraintHorizontal_bias="0.941"
app:layout_constraintStart_toStartOf="@+id/ratio_input_label"
app:layout_constraintTop_toBottomOf="@+id/ratio_input_label"
tools:text="@string/result" />
</androidx.constraintlayout.widget.ConstraintLayout>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
<background android:drawable="@drawable/ic_launcher_background" />
<foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
<background android:drawable="@drawable/ic_launcher_background" />
<foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>

[10 binary image files deleted (launcher icons at several densities); contents not shown.]


@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="colorPrimary">#3F51B5</color>
<color name="colorPrimaryDark">#303F9F</color>
<color name="colorAccent">#FF4081</color>
</resources>


@@ -1,10 +0,0 @@
<resources>
<string name="app_name">NN API Demo: sequence</string>
<string name="compute">Compute</string>
<string name="reset">Reset</string>
<string name="result">Result: </string>
<string name="initial_value">Initial Value: </string>
<string name="ratio">Ratio: </string>
<string name="steps">Steps: </string>
<string name="none">None</string>
</resources>


@@ -1,8 +0,0 @@
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="android:Theme.Material.Light.DarkActionBar">
<!-- Customize your theme here. -->
</style>
</resources>


@@ -1,6 +1,8 @@
cmake_minimum_required(VERSION 3.22.1)
project(OrderfileDemo CXX)
include(AppLibrary)
# We have set up build variables that you can comment or uncomment to use.
# Make sure only one build variable is uncommented at a time.
# If you want to generate profiles and the mapping file, make sure GENERATE_PROFILES is uncommented.
@@ -10,16 +12,16 @@ project(OrderfileDemo CXX)
set(GENERATE_PROFILES ON)
#set(USE_PROFILE "${CMAKE_SOURCE_DIR}/demo.orderfile")
add_library(orderfiledemo SHARED orderfile.cpp)
add_app_library(orderfiledemo SHARED orderfile.cpp)
target_link_libraries(orderfiledemo log)
if(GENERATE_PROFILES)
# Generating profiles requires any optimization flag aside from -O0.
# The mapping file will not be generated, and the profile instrumentation does not work, without an optimization flag.
target_compile_options(orderfiledemo PRIVATE -forder-file-instrumentation -O1 -mllvm -orderfile-write-mapping=mapping.txt )
target_link_options(orderfiledemo PRIVATE -forder-file-instrumentation )
target_compile_options(orderfiledemo PRIVATE -forder-file-instrumentation -O1 -mllvm -orderfile-write-mapping=mapping.txt)
target_link_options(orderfiledemo PRIVATE -forder-file-instrumentation)
target_compile_definitions(orderfiledemo PRIVATE GENERATE_PROFILES)
elseif(USE_PROFILE)
target_compile_options(orderfiledemo PRIVATE -Wl,--symbol-ordering-file=${USE_PROFILE} -Wl,--no-warn-symbol-ordering )
target_link_options(orderfiledemo PRIVATE -Wl,--symbol-ordering-file=${USE_PROFILE} -Wl,--no-warn-symbol-ordering )
target_compile_options(orderfiledemo PRIVATE -Wl,--symbol-ordering-file=${USE_PROFILE} -Wl,--no-warn-symbol-ordering)
target_link_options(orderfiledemo PRIVATE -Wl,--symbol-ordering-file=${USE_PROFILE} -Wl,--no-warn-symbol-ordering)
endif()


@@ -1,94 +1,15 @@
# Prefab Samples
# Sample removed
The samples in this directory demonstrate how to create and consume
[C/C++ dependencies] with the Android Gradle Plugin. Dependencies are packaged
within AARs using the [Prefab] package format. The Android Gradle Plugin
natively supports producing and consuming these, so you do not need to worry
about the details of the packaging format.
The samples that used to reside in this directory have been removed. The
behaviors demonstrated here (publication and consumption of AARs with native
APIs) are now used throughout the samples for code sharing, and were Android
Gradle Plugin features rather than NDK features.
The individual samples will explain in more detail, but the typical workflow for
a consumer of a native dependency is:
For an example of how to create an AAR that exposes a native library, see the
`base` module, or any other with `buildFeatures { prefabPublishing = true }`.
1. Enable the prefab build feature in your `build.gradle` file.
1. Add the dependency to the dependencies block of your `build.gradle` file.
1. Import the package in your `CMakeLists.txt` or `Android.mk` file.
1. Link the dependencies to your libraries (see the sketch below).
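
A minimal sketch of those consumer steps; the module layout is illustrative, `libs.curl` is this repo's version-catalog alias for the curl AAR, and `curl::curl` is the CMake target that AAR exports:

// build.gradle (consumer module)
android {
    buildFeatures {
        prefab = true
    }
}
dependencies {
    implementation libs.curl
}

# CMakeLists.txt
find_package(curl REQUIRED CONFIG)
target_link_libraries(app curl::curl)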
For examples of how to consume native APIs from an AAR, see any of the other
samples which set `buildFeatures { prefab = true }`.
Linking the dependency to your library will automatically make the headers
available, link the required libraries, and include the dependencies you use in
your application or library.
To produce an AAR that exposes C/C++ APIs:
1. Enable the `prefabPublishing` build feature in your `build.gradle` file.
1. Use `android.prefab` in your `build.gradle` file to declare the names of the
libraries you wish to export and the headers that define their interface
(see the sketch below).
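
A minimal producer sketch covering those two steps; the `mylib` name and header path are illustrative:

// build.gradle (producer module)
android {
    buildFeatures {
        prefabPublishing = true
    }
    prefab {
        mylib {
            headers "src/main/cpp/include"
        }
    }
}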
To test your packages, follow the steps for package consumers. Note that until
https://issuetracker.google.com/120166563 is fixed, it is not possible to depend
on the native components of an AAR generated by another module of the same
project when using Android Studio. Instead, test in a separate project. You can
import the AAR into your test project either
[directly](https://developer.android.com/studio/projects/android-library#AddDependency)
or by publishing it to a Maven repository.
With that in mind, the samples here collectively demonstrate prefab usage:
- prefab-publishing shows how to create an AAR for distributing native libraries
- prefab-dependency shows how to import native dependencies from [GMaven]
- curl-ssl shows how to use two specific and important AARs (curl and OpenSSL)
## Prefab Availability
Support for Prefab packages has been available in the Android Gradle Plugin
since version 4.0:
- Consuming native dependencies requires AGP 4.0+
- Generating AARs that export native dependencies requires AGP 4.1+
The AARs used by the samples here are hosted at [Google Maven], but you can host
your AARs anywhere accessible to Gradle.
## Pre-requisites
- Android Gradle Plugin 4.0+
- The [Android NDK](https://developer.android.com/ndk/)
Please check each individual sample's README.md for anything specific to that
sample.
## Support
If you've found an error in these samples, please
[file an issue](https://github.com/android/ndk-samples/issues/new).
Patches are encouraged, and may be submitted by
[forking this project](https://github.com/android/ndk-samples/fork) and
submitting a pull request through GitHub. Please see
[CONTRIBUTING.md](../CONTRIBUTING.md) for more details.
- [Stack Overflow](http://stackoverflow.com/questions/tagged/android-ndk)
- [Android Tools Feedback](http://tools.android.com/feedback)
## License
Copyright 2020 Google, Inc.
Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for
additional information regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
[c/c++ dependencies]: https://developer.android.com/studio/build/native-dependencies?buildsystem=cmake&agpversion=4.0
[gmaven]: https://maven.google.com/web/index.html?q=ndk#com.android.ndk.thirdparty
[google maven]: https://maven.google.com/web/index.html#com.android.ndk.thirdparty
[prefab]: https://google.github.io/prefab
See https://developer.android.com/build/native-dependencies for the Android
Gradle Plugin's documentation of these features.


@@ -1,64 +0,0 @@
# curl-ssl
This sample shows how to import [curl] and [OpenSSL] from Google Maven to
display a list of the 10 most recent reviews submitted to AOSP's code review
system.
One of the goals is to demonstrate how to handle HTTPS certificates correctly, as
explained by
[this Stack Overflow post](https://stackoverflow.com/a/30430033/632035): the
root certificates presented by Android since ICS are not in the format OpenSSL
expects, so we need to provide our own certificate file. We do this by
downloading curl's cacert.pem and storing that in our assets directory, as
described in
[this Stack Overflow post](https://stackoverflow.com/a/31521185/632035).
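
On the curl side, that approach boils down to pointing curl at the extracted bundle. A hedged sketch; the helper name and path handling are illustrative, not the sample's actual code:

#include <string>

#include <curl/curl.h>

// Use the CA bundle previously copied from the APK's assets into app storage.
bool UseExtractedCaBundle(CURL* handle, const std::string& cacertPath) {
  return curl_easy_setopt(handle, CURLOPT_CAINFO, cacertPath.c_str()) ==
         CURLE_OK;
}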
If you want to understand how to use C/C++ dependencies with AGP, refer to:
- the [prefab-dependency] and [prefab-publishing] samples in the same directory
  as this one
- the [C/C++ dependencies] page of the Android documentation
## Pre-requisites
- Android Gradle Plugin 4.0+
- The [Android NDK](https://developer.android.com/ndk/).
## Getting Started
The C++ code in this sample can be built with either CMake (the default for this
project) or ndk-build. To use ndk-build set the `ndkBuild` project property
either in your `local.properties` file or on the command line by passing the
`-PndkBuild` flag when building.
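
For example, to build and install the debug variant with ndk-build from the command line:

./gradlew installDebug -PndkBuild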
To build with [Android Studio](http://developer.android.com/sdk/index.html):
1. Open this project in Android Studio.
1. Click *Run/Run 'app'*. If you want to debug/trace code, due to
[the SSL lib's known issue with lldb](https://github.com/android/ndk-samples/issues/740),
make sure to apply the recommendations there for a smooth debugging
experience.
To build from the command line:
1. Navigate to this directory in your terminal.
1. Run `./gradlew installDebug` (or `gradlew.bat installDebug` on Windows).
## Screenshots
![screenshot](screenshot.png)
## Support
If you've found an error in these samples, please
[file an issue](https://github.com/android/ndk-samples/issues/new).
Patches are encouraged, and may be submitted by submitting a pull request
through GitHub. Please see [CONTRIBUTING.md](../../CONTRIBUTING.md) for more
details.
[c/c++ dependencies]: https://developer.android.com/studio/build/native-dependencies?buildsystem=cmake&agpversion=4.0
[curl]: https://curl.haxx.se/
[openssl]: https://www.openssl.org/
[prefab-dependency]: https://github.com/android/ndk-samples/blob/main/prefab/prefab-dependency
[prefab-publishing]: https://github.com/android/ndk-samples/blob/main/prefab/prefab-publishing


@@ -1,68 +0,0 @@
/*
* Copyright (C) 2019 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
plugins {
id "ndksamples.android.application"
id "ndksamples.android.kotlin"
}
android {
namespace 'com.example.curlssl'
defaultConfig {
applicationId "com.example.curlssl"
versionCode 1
versionName "1.0"
externalNativeBuild {
if (!project.hasProperty("ndkBuild")) {
cmake {
arguments "-DANDROID_STL=c++_shared"
}
}
}
ndk {
// None of the ndkports libraries currently (August 2025) include
// riscv64 libraries.
abiFilters.remove("riscv64")
}
}
externalNativeBuild {
if (!project.hasProperty("ndkBuild")) {
cmake {
path "src/main/cpp/CMakeLists.txt"
}
} else {
ndkBuild {
path "src/main/cpp/Android.mk"
}
}
}
buildFeatures {
prefab = true
}
}
dependencies {
implementation libs.appcompat
implementation libs.androidx.constraintlayout
implementation libs.curl
implementation libs.jsoncpp
implementation libs.openssl
}

Some files were not shown because too many files have changed in this diff.