
In document Native development VS React Native (Page 39-44)

5.2 Progression

5.2.11 Native Android (Kotlin) - Update NewTaskSheet

class NewTaskSheet(var taskItem: TaskItem?) :
    BottomSheetDialogFragment() {

    ...

    if (taskItem != null) {
        binding.taskTitle.text = "Edit Task"
        val editable = Editable.Factory.getInstance()
        binding.name.text = editable.newEditable(taskItem!!.name)
        binding.desc.text = editable.newEditable(taskItem!!.desc)
        /* if (taskItem!!.dueTime != null) { … } */
    } else {
        binding.taskTitle.text = "New Task"
    }

Figure 39 update code in NewTaskSheet.kt

As shown in Figure 39, code for updating the name and description of the TaskItem is added, and the now-unnecessary code at the end is deleted. Because the name and description of a TaskItem can be updated here, the corresponding code in MainActivity.kt is also deleted.

17 Native Android sixth commit: https://github.com/a19xinhu/NariveAmdroid2/commit/ee5f550

private fun saveAction() {
    val name = binding.name.text.toString()
    val desc = binding.desc.text.toString()
    if (taskItem == null) {
        val newTask = TaskItem(name, desc, null, null)
        taskViewModel.addTaskItem(newTask)
    } else {
        taskViewModel.updateTaskItem(taskItem!!.id, name, desc, null)
    }
}

Figure 40 update save button code in NewTaskSheet.kt

Also, because the name and desc of a TaskItem can now be updated, the saveAction function is changed as shown in Figure 40, so that the data is passed on and the changes are saved.
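The ViewModel side of this call is not shown in the report. A minimal sketch of what taskViewModel.updateTaskItem could look like, assuming the list of TaskItems lives in a MutableLiveData; the UUID id type, the LocalTime dueTime type, and the field names are assumptions based on the snippets above, not the repository code:

```kotlin
// Sketch only — not the actual code from the repository.
// Assumes: taskItems is a MutableLiveData<MutableList<TaskItem>>,
// TaskItem.id is a UUID, TaskItem.dueTime is a LocalTime?.
fun updateTaskItem(id: UUID, name: String, desc: String, dueTime: LocalTime?) {
    val list = taskItems.value ?: return
    // Find the item being edited and update its fields in place.
    list.find { it.id == id }?.apply {
        this.name = name
        this.desc = desc
        this.dueTime = dueTime
    }
    // Re-post the list so observers (e.g. the RecyclerView) refresh.
    taskItems.postValue(list)
}
```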

5.2.12 Native Android (Kotlin) - Add TaskItem Adapter and ViewHolder18

Next, a RecyclerView is needed to show the content.

<androidx.recyclerview.widget.RecyclerView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/todoListRecyclerView"
    android:backgroundTint="@color/…"
    />

Figure 41 add RecyclerView in activity_main.xml

As shown in Figure 41, the LinearLayout in activity_main.xml has been replaced with a RecyclerView, which requires a new xml layout file.

18 Native Android seventh commit: https://github.com/a19xinhu/NariveAmdroid2/commit/5d653e7


<androidx.cardview.widget.CardView ...
    xmlns:app="http://schemas.android.com/apk/res-auto">

    <LinearLayout
        android:orientation="horizontal"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <ImageButton
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:backgroundTint="@android:color/transparent"
            />

Figure 42 code in task_item_cell.xml

A new task_item_cell.xml was created; its code is shown in Figure 42.

class TaskItemAdapter(
    private val taskItem: List<TaskItem>
) : RecyclerView.Adapter<TaskItemViewHolder>() {

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): TaskItemViewHolder {
        val from = LayoutInflater.from(parent.context)
        val binding = TaskItemCellBinding.inflate(from, parent, false)
        return TaskItemViewHolder(parent.context, binding)
    }

    override fun onBindViewHolder(holder: TaskItemViewHolder, position: Int) {
        holder.bindTaskItem(taskItem[position])
    }

    override fun getItemCount(): Int = taskItem.size
}

Figure 43 code in TaskItemAdapter.kt

class TaskItemViewHolder(
    private val context: Context,
    private val binding: TaskItemCellBinding
) : RecyclerView.ViewHolder(binding.root) {
}

Figure 44 code in TaskViewHolder.kt

A RecyclerView needs an Adapter and a ViewHolder, so these two files were created; their code is shown in Figures 43 and 44.
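The ViewHolder in Figure 44 is still empty, but the adapter already calls holder.bindTaskItem(...), so the class will need a method along these lines. This is a sketch only; the exact view ids come from task_item_cell.xml and are assumptions:

```kotlin
// Hypothetical bindTaskItem — the view ids are assumptions, not the
// repository code. Binds one TaskItem to the views of task_item_cell.xml.
fun bindTaskItem(taskItem: TaskItem) {
    binding.name.text = taskItem.name
    binding.desc.text = taskItem.desc
}
```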

...
    setRecyclerView()
}

private fun setRecyclerView() {
    val mainActivity = this
    taskViewModel.taskItems.observe(this) {
        binding.todoListRecyclerView.apply {
            layoutManager = LinearLayoutManager(applicationContext)
            adapter = TaskItemAdapter(it)
        }
    }
}

Figure 45 setRecyclerView code in MainActivity.kt

Finally, the RecyclerView is set up in MainActivity.kt, as shown in Figure 45.

5.2.13 Native Android (Kotlin) - Delete text-related code, add image function19

The last step is to modify the code so that the app better matches this experiment.

19 Native Android eighth commit: https://github.com/a19xinhu/NariveAmdroid2/commit/1da95e6


Figure 46 app preview

The subsequent commit deletes the text-related code and adds image-related code, so that the application only adds pictures.

Figure 47 app preview after click new task button


Figure 48 app preview after click add task 50 times

5.3 Pilot Study

In this chapter, a pilot study was carried out. Its task is to check whether the measurements made in this work are correct and feasible. The purpose of the pilot study is to investigate the image rendering speed of applications built with the different development methods.

First, enable developer mode on the Android tablet, turn on USB debugging, and connect the Mac to the tablet via USB. After packaging the code into apk files, install the two apks on the tablet, enable Profile HWUI rendering in the developer options, open the application to be measured, and click the button to generate a picture. Then run the following command in the terminal on the Mac:

adb shell dumpsys gfxinfo <packagename>

The terminal then displays detailed timing data, as shown in Figure 49.


Figure 49 part of raw data

Draw: the time taken by the onDraw() method to build the display list on the Java side.

Prepare: the preparation time.

Process: the time the rendering engine takes to execute the display list; the more views there are, the longer this takes.

Execute: the actual time taken to send a frame of data to the screen to be composited and displayed.

This experiment measures after pictures have been added 500 times. Since Android's built-in performance analysis tool only reports the last 120 frames (about 2 seconds at 60 fps) each time, this pilot study took 120 measurement points for each of the two applications. Because this experiment is about image rendering performance, only the data in the Process column is used. The 120 measurement points are shown in milliseconds in the line graph in Figure 50 and in the bar graph in Figure 51.
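Extracting the Process column from the dumpsys output can be automated. The sketch below is not part of the thesis code and the sample lines are illustrative, not real measurements; it parses profile rows of the form `Draw Prepare Process Execute` and averages the Process column:

```kotlin
// Extract the Process column (index 2) from gfxinfo profile rows.
// Non-numeric rows such as the header line are skipped automatically.
fun processTimes(profileLines: List<String>): List<Double> =
    profileLines.mapNotNull { line ->
        val cols = line.trim().split(Regex("\\s+"))
        // Expect exactly four numeric columns: Draw, Prepare, Process, Execute.
        if (cols.size == 4) cols[2].toDoubleOrNull() else null
    }

fun mean(xs: List<Double>): Double = xs.sum() / xs.size

fun main() {
    val sample = listOf(
        "Draw Prepare Process Execute", // header row, skipped
        "1.25 0.50 3.25 0.75",
        "1.00 0.50 3.75 0.50",
    )
    println(processTimes(sample)) // [3.25, 3.75]
    println(mean(processTimes(sample))) // 3.5
}
```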

Figure 50

Figure 51

In order to ensure the validity of the measurement, all measurements in this pilot study were performed on the same Android tablet, and no internet connection was required. The hardware and software used to perform the experiments are listed in Table 1 below.

Table 1 Specs

Model   Galaxy Tab A8 SM-X200

System  Android 13 / T / API 33

RAM     2.42 GB

CPU     8-core 64-bit UNISOC T618

5.3.1 Analysis of Pilot Study

In this pilot study, there is a clear difference in image rendering time between the two applications.

In the line graph in Figure 50, the React Native application fluctuates noticeably, while the rendering time of the native application fluctuates little. This means the React Native application renders intermittently, unlike the more stable native application. In the bar chart in Figure 51, the React Native app averaged 4.94 ms and the native app 3.3 ms. Looking at the standard deviation of the rendering times, the native app has the better rendering performance. The standard deviations overlap because the content and functions of the two applications are basically the same, but an impact on the data from differences in experimental method and executed code cannot be ruled out.


Two things can be improved in subsequent measurements: one is to increase the number of pictures rendered at the same time, and the other is to use more complex pictures, observing the impact of each on rendering performance separately.

Regarding applications for the iOS platform, packaging an IPA file requires a developer account, which would have to be purchased. Therefore, no iOS measurements were carried out in this pilot study. For the subsequent measurements, a new measurement method may be tried, or a developer account purchased.


6 Evaluation

6.1 Presentation of examination

When measuring the pilot study, the data obtained was not the image rendering speed this experiment had hoped for, but the rendering speed of the entire application. It is difficult to measure the rendering speed of a single image, and measuring the rendering speed of the application can still reflect, to a certain extent, the difference in rendering performance caused by the different development methods. So the experiment was changed to measure the application rendering speed.

In the pilot study, two conjectures were raised about what could affect rendering performance, so 3 test cases were performed. The first test case is basically the same as the pilot study, but the number of measurements has been increased from 120 to 1080 to obtain more accurate data for analysis. The test renders different pictures and then scrolls the screen for 2-3 seconds to obtain 120 data points per run. The second test case changes the code so that the application renders more pictures at the same time, to observe which development method handles multiple images with better performance. The third test case increases the image size and resolution to test the impact on rendering performance. All test cases and their descriptions are listed in Table 2.

Table 2 Test cases

Test ID       Test Name               Description

Test case 1   a normal image app      App can add a 917-byte image with resolution 89*21

Test case 2   three normal image app  App can add three 917-byte images with resolution 89*21

Test case 3   a big image app         App can add a 3597-byte image with resolution 178*42

6.2 Analysis

6.2.1 Test case 1 a normal image app

Figure 52 raw data line graph

Figure 53 raw data mean with STD

Table 3 Test case 1 data

        Mean              STD                CI(95%)      P-value

Native  3.26618518518519  0.292763957661175  0.727265988
ReactN  3.77733333333333  1.74314660144886   4.33021621   5.193E-21

In Test Case 1, the application rendering time of the apps produced by the two development methods was measured on the Android platform. Each of the two applications was measured 1080 times. The line graph in Figure 52 shows that the data collected



by the React Native application is relatively unstable, with multiple spikes. The bar chart in Figure 53 shows that the average rendering speed of the two applications is similar, but the React Native application is much less stable. Due to the many spikes, the CI value for React Native in Table 3 is even greater than its mean, so this data does not estimate React Native very accurately. At the same time, this makes the P-value extremely small, practically zero, which means that the application rendering performance of the two apps differs.

6.2.2 Test case 2 three normal image app

Figure 54 3 times raw data line graph

Figure 55 3 times raw data mean with STD



Table 4 Test case 2 data

        Mean              STD                CI(95%)      P-value

Native  4.43309259259259  0.614175208690588  0.977289812
ReactN  4.41809259259259  1.21775628571572   1.937721996  0.71780

In test case 2, the measurement process is similar to test case 1, except that the number of images rendered at the same time is increased to 3. Judging from the line graph in Figure 54, increasing the number of simultaneously rendered images leads to larger fluctuations in the data. The bar chart in Figure 55 shows that the average rendering times of the two applications are similar, but the React Native data is more scattered. Since the data of both apps have multiple spikes, Table 4 shows similar means for the two apps, but in terms of CI and STD the Native application's data is more precise. However, the P-value reaches 0.7178, which indicates no significant difference in the rendering performance of the two applications.

6.2.3 Test case 3 a big image app

Figure 56 big image raw data line graph


Figure 57 big image raw data mean with STD

Table 5 Test case 3 data

        Mean              STD                CI(95%)      P-value

Native  3.45738888888889  0.672411066203109  1.670361687
ReactN  3.25275           0.881249288315017  2.189144591  1.53E-09

In test case 3, the measurement process is similar to test case 1, except that the rendered image's size and resolution are doubled. In the line graph in Figure 56, the change of picture makes the data of the two applications overlap to a certain extent. The bar graph in Figure 57 shows that the average rendering time of the Native application is higher than that of the React Native application, but the React Native data is still more scattered. Since there are only a few spikes, the STDs in Table 5 become smaller, but judging from the CI, the data of the Native application is still more precise than that of the React Native application.

The P-value from the ANOVA is again very small, indicating a difference between the rendering performance of the two applications.
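For reference, the F statistic behind such an ANOVA P-value can be computed directly from the two raw samples. This is a minimal sketch with my own function names, not the analysis script actually used; turning F into a P-value additionally needs an F-distribution CDF, e.g. from a statistics library:

```kotlin
// One-way ANOVA F statistic for two samples (between-group df = 1).
fun mean(xs: List<Double>): Double = xs.sum() / xs.size

fun anovaF(a: List<Double>, b: List<Double>): Double {
    val grand = mean(a + b)
    val ma = mean(a)
    val mb = mean(b)
    // Between-group sum of squares; df = 1 for two groups.
    val ssBetween = a.size * (ma - grand) * (ma - grand) +
                    b.size * (mb - grand) * (mb - grand)
    // Within-group sum of squares; df = n - 2.
    val ssWithin = a.sumOf { (it - ma) * (it - ma) } +
                   b.sumOf { (it - mb) * (it - mb) }
    val dfWithin = (a.size + b.size - 2).toDouble()
    return ssBetween / (ssWithin / dfWithin)
}

fun main() {
    // Toy data, not the measured values from the experiment.
    println(anovaF(listOf(1.0, 2.0, 3.0), listOf(3.0, 4.0, 5.0))) // 6.0
}
```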

6.3 Conclusions

This experiment uses test case 1 as the baseline and analyzes how the independent variables changed in test cases 2 and 3 affect the measurement data.

First, test case 1. From the line graph and bar graph, the rendering performance of the Native application is better than that of the React Native application. The STD value of the React Native application is about six times that of the Native application, which means that the rendering performance of the Native application is more stable.



In test case 2, the two applications render 3 pictures at the same time, which increases the rendering time of both, but unexpectedly the means of the two data sets are very close. Compared with test case 1, the gap between the STD values shrinks from six times to two times: the STD of the Native application increased, while the STD of the React Native application decreased.

In test case 3, both applications render images with twice the size and resolution. Compared with test case 1, the mean of the Native application increased, while the mean of the React Native application decreased. Moreover, the STDs of the two applications are now almost the same: the Native value increased and the React Native value decreased.

Since the applications are developed in different ways, a difference in rendering performance would be normal, but the P-value in test case 2 reaches 0.7178, which means there is no significant difference in rendering performance there. This may be because both data sets have multiple spikes.

In general, the performance of the Native application is stable throughout, while as the rendering load on the application increases, the average rendering time of the React Native application keeps shrinking. Judging purely from the data, though, its stability is still not as good as that of the Native application.


7 Concluding discussion

7.1 Summary

With the development of smartphones, mobile application payments have become part of e-commerce, and to capture more of the market, application performance is key. Rendering performance is one aspect of this, and choosing the appropriate app development method for it is very important.

In this experiment, two development methods are used to build apps that are as similar as possible. One uses native development, the other uses React Native for cross-platform development. The applications can add pictures to the interface, in order to observe which development method has better rendering performance. The applications are adapted from the todo program of the corresponding platform.

There are a total of 3 test cases in this experiment. The data of case 1 is the baseline, used for comparison with the data of the other cases to understand the impact of the different independent variables on the rendering performance of the application. Case 2 shows the impact of the number of simultaneous renderings, and case 3 the impact of image size.

This experiment is based on the article by Novac, C.M. et al. (2021), but the content of the experiment is different. The original plan was to measure image rendering speed, but due to the limitations of the measurement method, it finally became the application rendering speed on the Android platform, so the new hypothesis became:

H1: Apps developed natively on the Android platform have faster application rendering speeds than apps developed with JavaScript cross-platform frameworks.

According to the data of the three test cases, the native development method is better in terms of both stability and rendering time for applications that only render pictures. However, as the number of simultaneously rendered pictures increases and larger-resolution pictures are used, React Native's rendering performance approaches that of native development, even becoming slightly better than the native application. Its stability, however, is still not as good as that of the native development method.

7.2 Discussion

Before conducting this experiment, I browsed and read various articles about development methods. Most articles discuss only a single development method, or compare two development languages on one platform or in one aspect. The articles by Brito, H. et al. (2018) and Brito, H. et al. (2019) both discuss a variety of different development methods to some extent, and compare and summarize them.

Most of the articles stated that the native development method has better performance, while cross-platform development saves more resources. There should then be a point of balance between resource consumption and application performance, that is, the pursuit of cost-effectiveness of resource consumption, and hence this research experiment. However, this


experiment cannot find the best cost-effective resource consumption, and this experiment is just the beginning.

There seem to be no directly comparable articles for this experiment. After all, as mentioned in the Method Description chapter, few organizations or companies use two development methods to build applications with the same functions. During this experiment, in order to ensure the reliability of the results, a single function, adding an image, was chosen for the application. But as Brito, H. et al. (2019) note, hybrid development is becoming the preferred development method, so it is unclear whether the results of this study will remain applicable in the future.

Many factors affect the results and reliability of this experiment. The first is replicability, so that others can repeat the experiment and confirm the results; the application code for this experiment is published on GitHub for easy reproduction. Next is the application code itself: the two apps use different development languages, which is a factor that certainly affects the measurement results. Then there is the measurement method: the screen has to be scrolled, and the scrolling speed may differ between measurements. Finally, the image type affects the validity of the results: the pictures in this experiment are all png images, and other image types might give other results.

Finally, this experiment only tested image rendering performance, which does not represent the overall rendering performance of the two development methods. Also, new development methods and tools may appear in the future and make this research less relevant, just as current smartphones compare to the mobile phones of 20 years ago.

7.3 Ethics and society

From an ethical point of view, this experiment must first of all be reproducible, so the code for all the applications in this experiment can be found on GitHub; the experiment can be verified by downloading the corresponding version of the application. The pictures used in the applications are free of copyright protection. The test cases in this experiment are not comprehensive, since different Android devices may produce different results.

This experiment may or may not be important. It is important because this research and future work can help developers choose an appropriate development method and reduce development costs and time. It is not important because there may be better development methods or development tools next year or a few years later, and technology iterations will cause this research to lose its validity.

In the Brito, H et al (2018) article referred to in this research, it is also mentioned that the hybrid solution is getting closer to the native solution in terms of execution speed and content rendering. This is why most applications today choose hybrid development. Hybrid development usually requires less development time to have decent performance.
