The immediate suggestion most people will have for running UI tests in the cloud is Firebase Test Lab. Firebase Test Lab is a great solution, but I need something that has the following capabilities:
The ability to install multiple arbitrary APKs that are not the app under test
The ability to clone customized emulators
The ability to save and resume emulators across test runs
Why do I need these capabilities?
I am developing an Android application that runs on and integrates with a Point of Sale platform made by Clover. Specifically, I need to test my application in an environment that mimics the Clover Station 2018 (pictured below).
Clover provides APKs that allow you to re-create the operating environment of the Station 2018 on an emulator.
Using Genymotion Cloud we can re-create the Station 2018 environment on a tablet emulator image and then save that emulator image and use it or clone it later. This is something not available on Firebase Test Lab.
Below is a screenshot of a tablet emulator image configured as a Clover Station 2018 running in Genymotion Cloud SaaS.
Once we have configured the emulator, we can save the state of the emulator and give it a meaningful name.
The recipe UUID allows us to start a new instance of this emulator image later. Genymotion provides a CLI tool that allows us to start an instance from the command line using the following command:
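With Genymotion's gmsaas CLI, that looks like the following (the instance name is arbitrary):

```shell
# Start a new instance from the saved recipe; prints the new instance UUID
gmsaas instances start c9246a83-4f38-4742-a11f-42b5b765dbdc clover-station-2018
```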
That will start an instance that we can connect to over ADB. We can also connect to the display through the Genymotion Cloud console.
The CLI tool also provides a command to connect ADB to the instance that was just started. After connecting over ADB, we can see that the Clover packages are installed and the emulator is a perfect clone of the image we previously prepared:
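With gmsaas that can look like this (the instance UUID is printed by the start command; the placeholder is illustrative):

```shell
# Connect ADB to the freshly started instance
gmsaas instances adbconnect <instance-uuid>

# Verify the Clover packages from the saved image are present
adb shell pm list packages | grep -i clover
```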
Using the orb we can make a job that runs UI tests on an instance of the emulator image we previously created. The job outlined below will build the APKs to test, start a Genymotion emulator instance that is pre-configured as a Clover Station 2018, and then run a specific set of UI tests. Once the tests are complete, the emulator instance is stopped.
uiTest:
  executor: android
  steps:
    - checkout
    - run:
        name: Chmod permissions # if the Gradle wrapper fails with a permission error, use this
        command: |
          sudo chmod +x gradlew
    - restore_cache:
        keys:
          - gradle-{{ checksum "app/build.gradle" }}
    - run:
        name: Build APKs to test
        command: |
          ./gradlew aCD aCDAT
    - genymotion-saas/setup
    - genymotion-saas/start-instance:
        recipe_uuid: "c9246a83-4f38-4742-a11f-42b5b765dbdc"
    - run:
        name: Install APKs to test
        command: |
          ./gradlew iCD iCDAT
    - run: (! adb shell am instrument -w -r --no-window-animation -e debug false -e class 'com.fivestars.yoshi.feature.discount.view.DiscountActivityTest' com.fivestars.mpos.clover.test/androidx.test.runner.AndroidJUnitRunner | tee /dev/tty | grep -q FAILURES!!!)
    - genymotion-saas/stop-instance
And with that, we have accomplished what we needed. We are now able to test our application in CI using an emulator that is pre-configured with the environment we need to test in.
Thank you for reading! Please let me know what you think!
Genymotion offers a free hour of Cloud SaaS.
There is an introduction video on how to get started below. Give it a try!
I was recently presented with the problem of testing a system that required two Android emulators running side by side in coordination. Our QA team had this working on a local Windows machine, but we were exploring alternatives and wanted to move our testing activities to the cloud.
Attempts
We started out exploring CircleCI. This is a familiar cloud CI tool, but their executors do not support hardware-accelerated emulators. We also explored their macOS variants, which appear to be hosted on ESX, but I ran into GPU driver issues.
EC2 instances did not expose any virtualization extensions to the VM. That left us with ARM emulation, which is extremely slow and would likely take an entire day or more to run our full test suite.
After reading the docs on how to configure hardware acceleration for the Android Emulator, I realized that we needed to have access to virtualization extensions inside our VM/container. In virtualization speak, we needed “nested virtualization.” Both Google Cloud and Azure support nested virtualization, but I am more familiar with Google Cloud Platform so I started out on GCP.
The solution
A virtual machine hosted inside GCP that supports nested virtualization. We are going to set up our solution on Debian, but I have also had it working on Ubuntu. Let's get started.
To begin, we will follow the instructions defined here
Create an instance from the image. The CPU platform needs to be at least Intel Haswell, and we use a machine type with a few extra cores, which will help us run both emulators at the same time.
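As a sketch with gcloud (image name, zone, and machine type here are illustrative, not the exact values we used):

```shell
# Create an image with the nested-virtualization (VMX) license attached
gcloud compute images create nested-vm-image \
  --source-disk=disk1 --source-disk-zone=us-central1-a \
  --licenses="https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

# Create a Haswell-or-newer instance with some extra cores for the two emulators
gcloud compute instances create android-emulator-host \
  --zone=us-central1-a \
  --min-cpu-platform="Intel Haswell" \
  --machine-type=n1-standard-8 \
  --image=nested-vm-image

# From inside the VM, confirm VT-x is exposed (non-zero output means it worked)
grep -cw vmx /proc/cpuinfo
```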
We can now start our VNC server using the following command. We will be prompted for a password. Enter in a password and remember it.
vncserver
Once we start the server, it will create a few configuration files for us. We need to stop our VNC server and update our config files so that we get a more usable display when we connect to our VNC server.
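A sketch of that stop-and-reconfigure cycle, assuming an Xfce desktop environment is installed (the geometry is just an example):

```shell
# Stop the display the first run created
vncserver -kill :1

# Point the xstartup config at a real desktop session
cat > ~/.vnc/xstartup <<'EOF'
#!/bin/sh
startxfce4 &
EOF
chmod +x ~/.vnc/xstartup

# Restart with a more usable display geometry
vncserver -geometry 1920x1080 :1
```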
In this post we will explore an alternative to Apache Cordova that utilizes Kotlin Multiplatform. This example will show us how we can add Bluetooth functionality to a web app hosted in an Android WebView. This solution provides strong typing on both the JavaScript and Android platform and has the benefit of being able to share types across platforms too!
This solution is only implemented to replace the Android implementation, but could be extended to iOS or even Chrome Embedded Framework.
Throughout this post, we will be looking at the following repository and branch:
Cordova is a cross-platform tool that allows you to host your application UI in a web container, a WebView on Android, and interact with native APIs using a JavaScript bridge. Apache Cordova is well proven, but developing plugins which extend the capabilities of Cordova apps is an error-prone and burdensome activity. While attending Android Dev Summit 2019, I asked the WebView engineers what they would suggest to bridge web and native and they suggested using WebMessage. I was skeptical at first, but I was inspired after seeing this answer on StackOverflow.
The Kotlin Multiplatform Alternative
To replace Cordova we need to be able to do the following:
1. Send messages to Native Android from JavaScript
2. Take action from message received from JavaScript
3. Respond with Success, Failure, and possibly include a data payload after receiving a message from JavaScript
1. Send Messages to Native Android from JavaScript
To send messages to Native Android from JavaScript, we need to set up a bridge that allows two-way communication. This bridge was inspired by the previously mentioned SO answer.
The first step is to create a WebMessageChannel for our WebView:
private val webMessagePorts = WebViewCompat.createWebMessageChannel(webView)
This will create a message channel that will allow Android to talk to JavaScript and will allow JavaScript to talk to Android. We get two ports back from the call to this method and we need to send one of the ports back to JavaScript so that it knows how to talk to us. The following snippet sets a native Android callback that will receive the messages that come from JavaScript and then sends one of the ports of the message channel to the WebView.
val destPort = arrayOf(webMessagePorts[1])
// Set callback for port - this is what will receive the messages that are sent from JavaScript
webMessagePorts[0].setWebMessageCallback(javascriptToNativeCallback!!)
// Post a message to the WebView. The JavaScript code will have to capture this port so that it can talk to Native Android
WebViewCompat.postWebMessage(webView, WebMessageCompat(KEY_CAPTURE_PORT, destPort), Uri.EMPTY)
Now let’s look at how the JavaScript side handles the first incoming message from Native Android. This is where things get awesome. We will be writing “JavaScript” using Kotlin by utilizing Kotlin/JS. The configureChannel function below is called when our web app is first loaded and does the following:
Listens for incoming messages
When it gets a message, it checks whether the data is the capture-port key we sent from native
If it is the capturePort message, then we assign the port to outputPort so we can send messages to native Android.
fun configureChannel() {
    console.log("Configuring channel")
    window.addEventListener("message", {
        val event = it as MessageEvent
        if (event.data != KEY_CAPTURE_PORT) {
            console.log("event.data: ${event.data}")
            inputPort.postMessage(event.data)
        } else if (event.data == KEY_CAPTURE_PORT) {
            console.log("assigning captured port")
            outputPort = event.ports[0]
        }
    }, false)
    inputPort.start()
    outputPort.start()
}
We are writing our web app in Kotlin too. Below we call configureChannel from inside our index.html file and show a glimpse of adding a button that allows us to connect to a specific device over Bluetooth.
BluetoothSerial.configureChannel();
val root = document.getElementById("root")
root?.append {
    div {
        button {
            text("Connect to Device")
            onClickFunction = {
                BluetoothSerial.connect("18:21:95:5A:A3:80", {
                    console.log("Success function in connect");
                }, {
                    console.log("Not success");
                })
            }
        }
    }
}
This is what it looks like rendered on a tablet. It’s a POC and is focused on function so please forgive the design language ;P
Now that we have our channel setup, we can send messages to Native Android from JavaScript. In the snippet above, we are initiating a Bluetooth connection to another device. Let’s look at how this is implemented using the power of Kotlin Multiplatform and sharing code between Android and JavaScript.
Our messages sent from JavaScript will contain the following model:
@Serializable
data class JavascriptMessage(
    val action: Action,
    val successCallback: Callback?,
    val failureCallback: Callback?,
    val data: Map<String, String>? = null
)
Our messages coming from JavaScript include an Action, for example CONNECT, which informs Android that we would like to initiate a Bluetooth connection. The message also includes optional success and failure callbacks, which are invoked based on what happens on the native side. Finally, the message includes a data property that allows us to specify data that is delivered with the Action. In a CONNECT scenario, we include a KEY_MAC_ADDRESS entry that specifies which device to connect to. These keys are defined in common code and shared across platforms.
Now we need to register some callbacks to handle the response from the Native side and then send the message over.
@JsName("connect")
fun connect(macAddress: String, onSuccess: () -> Unit, onFailure: () -> Unit) {
    registerCallbacks(
        Callback.CONNECT_SUCCESS,
        Callback.CONNECT_FAILURE,
        onSuccess,
        onFailure,
        true
    )
    val message =
        JavascriptMessage(
            Action.CONNECT, Callback.CONNECT_SUCCESS, Callback.CONNECT_FAILURE, mapOf(
                KEY_MAC_ADDRESS to macAddress
            )
        )
    messageHandler?.sendMessageToNative(
        message
    )
}
2. Take action from message received from JavaScript
When we set up our web message channel, we assigned a message handler for incoming messages. This same message handler is where we deserialize incoming JavascriptMessage payloads and determine what to do based on the Action we find. We then grab the relevant data associated with the Action. In this instance, we are looking for the MAC address to connect to.
val javascriptMessage = json.parse(JavascriptMessage.serializer(), message.data!!)
when (javascriptMessage.action) {
    Action.CONNECT -> BluetoothSerial.connect(
        javascriptMessage.data!![KEY_MAC_ADDRESS] as String,
        javascriptMessage.successCallback,
        javascriptMessage.failureCallback
    )
    // ... other actions elided
}
Now we can call a native Android function that will assign the success and failure callbacks that were passed in from the JavascriptMessage and attempt connecting to the device!
fun connect(
    macAddress: String,
    successCallback: Callback?,
    failureCallback: Callback?
) {
    enableBluetoothIfNecessary()
    val bluetoothAdapter = BluetoothAdapter.getDefaultAdapter()
    val device = bluetoothAdapter.getRemoteDevice(macAddress)
    if (device != null) {
        BluetoothSerialService.connect(device)
        successCallback?.run {
            val nativeDataMessage =
                NativeDataMessage(
                    this,
                    null
                )
            messageHandler?.sendMessage(nativeDataMessage)
        }
    } else {
        sendFailure(failureCallback)
    }
}
3. Respond with Success, Failure, and possibly include a data payload
If we are successful, we call the success Callback that was passed in and respond with a NativeDataMessage. A native data message includes the callback that we are replying to and any relevant data. Because this is a string, we can send back any type that can be serialized to a string.
@Serializable
class NativeDataMessage(val callback: Callback, val data: String?)
We now have a facility to initiate actions from JavaScript to Native and back and can build upon that. We have implemented these Action commands and are still iterating on this pattern and example.
The callback handling on both sides is still being iterated on and improved to be more flexible. In another implementation, we are using callbacks that receive parameters of custom types (think User or Product) instead of just String.
Another improvement would be to move all of the Android code that is related to the Bluetooth “plugin” into the Android source set of the SharedCode module. This would allow us to ship this solution as a Kotlin Multiplatform library 🙂
Video Walkthrough
Thank you!
Please let me know what you think and any suggestions you might have. Thank you!
Many apps have a need to display some sort of data in chart form. Recently, I was tasked with implementing a chart solution on Flutter and Flutter Web and wanted to walk you through the implementation.
To get started, we need to add the charts_flutter package as a dependency in our pubspec.yaml:
dependencies:
  charts_flutter: ^0.8.1
This package has a great example gallery that includes code snippets that show how to implement each example. In this post we are going to walk through a time series chart implementation.
The end result will be an app that displays a chart on Flutter Mobile and Web.
Once we have imported our dependency, we need to get the data from our data source and transform it into a model that the chart library knows how to use. To do this, we are going to start with the repository pattern. We will create a ReportRepository class and fetch our reports from it. The data we are using comes from a sensor device, and the sensor device provides us with vibration and trip count data.
class Report {
  final String date;
  final double vibration;

  Report({this.date, this.vibration});

  factory Report.fromJson(Map<String, dynamic> json) {
    return Report(
        date: json['date'] as String, vibration: json['vibration'] as double);
  }
}
For our example, we are only interested in the vibration data so that is all we will parse into the report object.
Next, we will convert our “API” data into model objects that charts_flutter can use. charts_flutter expects a series of data associated with a domain, and they provide a factory for populating the object they can use. An excerpt can be seen below.
With this, we know we need to take our Report entries which include a date and a measurement for vibration and populate the Series factory that we can see above.
We will start by converting our report data yet again into something that we can use with the Series factory. Note: We could have done this when we got the data from the API, but sometimes it is better to keep the model for the rest of your domain pure and map it to the model that is currently being used when necessary. This helps create a separation and minimize impact should other areas of your application begin to rely on the report model. This is the model we will be mapping our entries to:
class VibrationData {
  final DateTime time;
  final double vibrationReading;

  VibrationData(this.time, this.vibrationReading);
}
Next, we will utilize the Series factory that we talked about earlier. For this, we have created a static function that takes in a list of VibrationData and returns a Series type that we can use with charts_flutter.
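A sketch of what that static function can look like (the function name and series id are our own choices, and `charts` is the conventional alias for `package:charts_flutter/flutter.dart`):

```dart
// Sketch: map VibrationData entries into a charts_flutter time series.
static List<charts.Series<VibrationData, DateTime>> createSeries(
    List<VibrationData> data) {
  return [
    charts.Series<VibrationData, DateTime>(
      id: 'Vibration',
      // Domain: the timestamp of each reading
      domainFn: (VibrationData reading, _) => reading.time,
      // Measure: the vibration value to plot
      measureFn: (VibrationData reading, _) => reading.vibrationReading,
      data: data,
    )
  ];
}
```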
We are going to use Dart inside an HtmlElementView, which will return a widget that we can attach to our Flutter Web widget hierarchy. Without further ado, let's look at what that looks like in code.
Widget getMap() {
  String htmlId = "7";
  // ignore: undefined_prefixed_name
  ui.platformViewRegistry.registerViewFactory(htmlId, (int viewId) {
    final myLatlng = new LatLng(30.2669444, -97.7427778);
    final mapOptions = new MapOptions()
      ..zoom = 8
      ..center = new LatLng(30.2669444, -97.7427778);
    final elem = DivElement()
      ..id = htmlId
      ..style.width = "100%"
      ..style.height = "100%"
      ..style.border = 'none';
    final map = new GMap(elem, mapOptions);
    Marker(MarkerOptions()
      ..position = myLatlng
      ..map = map
      ..title = 'Hello World!');
    return elem;
  });
  return HtmlElementView(viewType: htmlId);
}
In the code above, we use Dart to interact with the Google Maps library we added to our project. The library wraps the JavaScript API with Dart, and with that we get Dart IDE support and type safety. Looking at the block, we can see that we specify a LatLng object and set that as our map center. Lastly, we create a single marker and add it to the map.
If you are looking to set your app as the default app for a UsbDevice and do not need to do it programmatically, then I recommend you follow the answers outlined on this SO question. I have a project on GitHub that provides a sample implementation of one of the answers. If you need to set this permission programmatically, then keep reading…
The problem
Initially, we implemented the manual solution outlined in the SO question above on a kiosk-type device, but it was still a challenge for our operations team to remember to check the appropriate box before leaving the customer site. This would leave our kiosk device in a state where it was unable to talk to a USB card reader attached to it. We can modify our staging process to do this manual operation, but we also have hundreds of tablets in the field already deployed that need remediation. Our platform runs on a modified tablet that we have root access on. I started to dig through the framework code to see how things work and to see if there was something we could do programmatically.
Some background
When we interact with UsbDevices on Android, we typically do that through the UsbManager service. This is an Android system service that has an implementation exposed to app developers through the Android SDK and a platform implementation that lives in the Android Platform source. The SDK implementation (UsbManager) talks to the UsbService using AIDL. When we get the UsbManager service, the methods we call on it call the IUsbManager interface methods that are implemented by the UsbService on the platform side.
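For example, the SDK side of that conversation is the familiar system-service lookup; each call below crosses AIDL into the platform UsbService:

```java
// SDK side: UsbManager is a thin proxy over the IUsbManager AIDL interface.
UsbManager usbManager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
for (UsbDevice device : usbManager.getDeviceList().values()) {
    Log.d("USB", "Attached device: " + device.getDeviceName());
}
```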
We can see the UsbService getting started by the SystemServer platform code here
if (mPackageManager.hasSystemFeature(PackageManager.FEATURE_USB_HOST)
        || mPackageManager.hasSystemFeature(
                PackageManager.FEATURE_USB_ACCESSORY)) {
    // Manage USB host and device support
    Trace.traceBegin(Trace.TRACE_TAG_SYSTEM_SERVER, "StartUsbService");
    mSystemServiceManager.startService(USB_SERVICE_CLASS);
    Trace.traceEnd(Trace.TRACE_TAG_SYSTEM_SERVER);
}
Don’t lose patience, we are almost to something interesting. The UsbService class handles the management of UsbDevices, but it delegates settings management to a UsbSettingsManager class. The UsbSettingsManager class references a settings file:
mSettingsFile = new AtomicFile(new File(
        Environment.getUserSystemDirectory(user.getIdentifier()),
        "usb_device_manager.xml"));
The settings file is read when the UsbSettingsManager is instantiated, and the entries inside the file, called DeviceFilter entries, are added to an in-memory map:
// Maps DeviceFilter to user preferred application package
private final HashMap<DeviceFilter, String> mDevicePreferenceMap =
        new HashMap<DeviceFilter, String>();
This is the file that is created when a user connects a UsbDevice, the framework detects that an app has an intent filter to look for this device, and the user selects “Use by default for this USB device” for the particular device. In short, this is the file we need to programmatically create.
Sample dialog:
The usb_device_manager.xml file is located at /data/system/users/0/usb_device_manager.xml on our device.
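For reference, the file contents look roughly like the following (the package name and IDs here are illustrative; the exact attribute set is whatever DeviceFilter writes for your device):

```xml
<settings>
  <preference package="com.whereisdarran.setusbdefault">
    <usb-device vendor-id="1234" product-id="5678" />
  </preference>
</settings>
```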
Lastly, we need to know that this file is only read at UsbService start, and the in-memory map is not exposed publicly. This means that any modifications to this file will require a reboot or service stop/start before they can be used.
The solution
Write the usb_device_manager.xml file with details for our app/device.
Restart the UsbService so that it will read the modified file and use the new values.
Writing usb_device_manager.xml
We can use some of the publicly available platform code to help us write the usb_device_manager.xml file. The framework uses an XmlSerializer to write the settings file.
We can borrow this to write the file or we can use the serializer built into the SDK.
We also need to know the model we need to write. Earlier I mentioned the entries in the XML file were DeviceFilter entries. The DeviceFilter class is a private static inner class of UsbSettingsManager. We can “borrow” this code and the DeviceFilter constructor that takes a UsbDevice and use it in our solution.
DeviceFilter deviceFilter = new DeviceFilter(usbDevice);
writeSettingsFile(deviceFilter);
Now that we can recreate the same XML as the framework, we need to write the usb_device_manager.xml file. We can modify some of the platform code for our use.
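A sketch of that write, borrowing the platform approach with the SDK's XmlSerializer (here deviceFilter is the borrowed DeviceFilter instance from above, and the surrounding wiring is ours, not the framework's):

```java
// Sketch: serialize a minimal settings document the same way the framework does.
// DeviceFilter.write(XmlSerializer) is the method borrowed from UsbSettingsManager.
XmlSerializer serializer = Xml.newSerializer();
StringWriter writer = new StringWriter();
serializer.setOutput(writer);
serializer.startDocument(null, true);
serializer.startTag(null, "settings");
serializer.startTag(null, "preference");
serializer.attribute(null, "package", "com.whereisdarran.setusbdefault");
deviceFilter.write(serializer); // writes the <usb-device .../> element
serializer.endTag(null, "preference");
serializer.endTag(null, "settings");
serializer.endDocument();
// writer.toString() is then written out to our app's external files dir
```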
And once we are done writing the file, we need to copy it to the correct location and make sure its file permissions are correct. The following commands are executed programmatically in a shell environment on the kiosk device.
public static final String COMMAND_COPY_USB_FILE = "cp /sdcard/Android/data/com.whereisdarran.setusbdefault/files/usb_device_manager.xml /data/system/users/0/usb_device_manager.xml";
public static final String COMMAND_CHOWN_USB_FILE = "chown system:system /data/system/users/0/usb_device_manager.xml";
Lastly, we need to restart the UsbService.
We can do that by simply issuing a reboot command. Alternatively, in a root shell we could issue the stop command followed by a start command to bounce all of the system services.
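Concretely, either of the following works (the stop/start pair must run in a root shell on the device):

```shell
# Simplest option: reboot so UsbService re-reads usb_device_manager.xml
adb reboot

# Or, without a full reboot, bounce the framework services from a root shell:
stop
start
```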
I recently started a project to build LineageOS for the Lenovo TB-8504X/F. Part of the journey typically involves building a custom recovery. TWRP is now the de facto standard, so I started there.
On a recent project, I had the opportunity to use Dart's HttpClient to integrate with a medical device. During the integration, I learned a few things that are not immediately apparent from the docs and wanted to share what I learned with you all.
Sometimes you need to use Charles Proxy to debug what is happening. The HttpClient ignores proxies by default. We can modify the HttpClient to use the proxy with the following configuration:
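A minimal sketch, assuming Charles is listening at 192.168.1.100:8888 (adjust to your setup):

```dart
import 'dart:io';

HttpClient createProxiedClient() {
  final client = HttpClient();
  // Route all requests through the Charles proxy.
  client.findProxy = (uri) => "PROXY 192.168.1.100:8888";
  // Charles uses its own certificate for SSL proxying;
  // trust it only while debugging, never in production.
  client.badCertificateCallback = (cert, host, port) => true;
  return client;
}
```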
Basic Authentication is fairly straightforward to integrate, but Digest Access Authentication takes a little more work. Below you can see a sample Digest Access Authentication configuration:
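A minimal sketch of such a configuration (the username and password are placeholders):

```dart
import 'dart:io';

HttpClient createDigestClient() {
  final client = HttpClient();
  // Called when the server responds with an authentication challenge;
  // registering credentials makes the client retry the request with them.
  client.authenticate = (Uri url, String scheme, String realm) {
    client.addCredentials(
        url, realm, HttpClientDigestCredentials("username", "password"));
    return Future.value(true);
  };
  return client;
}
```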
The above snippet sets a function on the authenticate property of the client. When a resource the client is connecting to asks for authentication, we supply HttpClientDigestCredentials to authenticate the request. I found this to work well with GET requests, but it would fail with POST requests that included an attachment. This is because the client retries the request once credentials are provided, but it does not include attachments when replaying POST requests.
Digest Authentication with POST
To handle the situation with Digest Authentication not working with POST, I created an authentication header that I would send with the initial POST request:
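A sketch of how such a header can be computed, assuming the plain RFC 2617 flavor without qop and using package:crypto for MD5 (the nonce comes from a prior 401 challenge; the helper name is my own):

```dart
import 'dart:convert';
import 'package:crypto/crypto.dart';

// Sketch: build a Digest Authorization header to attach to the initial POST,
// so the request is never replayed (and the attachment is never dropped).
String digestAuthHeader(String user, String pass, String realm, String nonce,
    String method, String path) {
  String md5Hex(String input) => md5.convert(utf8.encode(input)).toString();
  final ha1 = md5Hex('$user:$realm:$pass');       // HA1 = MD5(user:realm:pass)
  final ha2 = md5Hex('$method:$path');            // HA2 = MD5(method:uri)
  final response = md5Hex('$ha1:$nonce:$ha2');    // response = MD5(HA1:nonce:HA2)
  return 'Digest username="$user", realm="$realm", nonce="$nonce", '
      'uri="$path", response="$response"';
}
```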