Improving burn depth assessment for pediatric scalds by AI based on semantic segmentation of polarized light photography images

Marco Domenico Cirillo a,b,c,*, Robin Mirdell d,e,f, Folke Sjöberg d,e,f, Tuan D. Pham g,**

a Department of Biomedical Engineering, Linköping University, Linköping, Sweden
b Centre for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
c Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
d The Burn Centre, Linköping University Hospital, Linköping, Sweden
e Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden
f Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
g Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia

Abstract

This paper illustrates the efficacy of an artificial intelligence (AI) method (a convolutional neural network based on the U-Net) for burn-depth assessment using semantic segmentation of polarized high-performance light camera images of burn wounds. The proposed method is evaluated on paediatric scald injuries to differentiate four burn wound depths, defined by observed healing time: superficial partial-thickness (healing in 0-7 days), superficial to intermediate partial-thickness (healing in 8-13 days), intermediate to deep partial-thickness (healing in 14-20 days), and deep partial-thickness and full-thickness burns (healing after 21 days). In total, 100 burn images were acquired. Seventeen images contained all four burn depths and were used to train the network. Leave-one-out cross-validation reports were generated, and an average accuracy and dice coefficient of almost 97% were obtained. After that, the remaining 83 burn-wound images were evaluated using the different networks from the cross-validation, achieving an accuracy and dice coefficient both averaging 92%.

This technique offers an interesting new automated alternative for clinical decision support to assess and localize burn depths in 2D digital images. Further training and improvement of the underlying algorithm, e.g. with more images, seems feasible and thus promising for the future.

© 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Article info

Article history: Available online xxx

Keywords: Artificial intelligence; Deep learning; Convolutional neural networks; U-Net; Semantic segmentation; Paediatric burns

1. Introduction

Burn wounds occur when the skin comes in contact with fire, hot water, electricity, or chemicals. Depending on the temperature and the duration of contact with the skin, different burn depths develop. Burn depth may be classified into separate levels [1]: superficial partial-thickness (I), superficial to intermediate partial-thickness (II), intermediate to deep partial-thickness (III), and deep partial and full-thickness burns (IV). Importantly,

* Corresponding author at: Department of Biomedical Engineering, Linköping University, Linköping, Sweden.
** Corresponding author.
E-mail addresses: marco.domenico.cirillo@liu.se (M.D. Cirillo), tpham@pmu.edu.sa (T.D. Pham).
https://doi.org/10.1016/j.burns.2021.01.011
0305-4179/© 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


burns of deep partial or full-thickness depth benefit from excision and skin grafting to heal appropriately. Patients less than 4 years old who suffer a burn due to hot water (a scald) represent 30-40% of the patients arriving at a Burn Centre. Being an age-defined and homogeneous group, facing burns mainly on the trunk and arms, they were chosen for this evaluation.

Burn depths are correctly classified by expert clinicians with an accuracy of around 64-76%, and by non-expert clinicians with around 50% [2-7]. Today, one tool that has been used successfully as decision support for clinicians is based on laser Doppler [8,9] and on its most recent development, laser speckle contrast imaging (LSCI) [10-12]. Such instruments have been advocated in order to improve burn-depth assessment, and they are used occasionally by clinicians as a decision support device [13]. These techniques provide perfusion images of the injured skin. Shortcomings include that they require training and knowledge to be fully operational and, most importantly, that the image-generating procedure is challenging and thus time consuming. This has led to the limited clinical use of the methodology. From an accuracy perspective, the technique also requires at least two consecutive measurements to be able to classify the burn depth with reliable accuracy [10].

For these reasons, an automatic, fast, objective, and accurate method is sought to evaluate such types of injuries, with the goal of helping clinicians (decision support) decide whether a patient will benefit from surgical treatment of the burn wound or not.

1.1. AI-based burn depth assessment by semantic image segmentation

Artificial intelligence based on convolutional neural networks for semantic image segmentation, such as fully convolutional networks [14], SegNet [15], and U-Net [16], has become very attractive in medicine because these models combine local and global image information, after which a pixel-wise classification is provided [17]. The only disadvantage is that these models require a demanding learning and training process at the beginning (with large computing capacity), but after that, they compute the segmentation of a single image in a few seconds. In recent years, the U-Net has become quite popular in the medical field, and many modified U-Nets have been created and applied in medical applications. For example: V-Net [18], to segment the prostate; DUNet [19], to segment retinal vessels; H-DenseUNet [20], to segment the liver and tumours within it; Attention U-Net [21], to segment the pancreas; and No New-Net [22] (2nd-place winner in the BraTS 2018 challenge), to segment brain tumours. In this paper we used a modified U-Net with residuals to segment four different burn depths (superficial partial-thickness (I), superficial to intermediate partial-thickness (II), intermediate to deep partial-thickness (III), and deep partial and full-thickness (IV)) in images generated by a high-performance light camera with polarisation filters, with the aim of providing automated and objective images to support the burn surgeon in burn-wound assessment.

2. Method

2.1. Patient population

Consecutively arriving children, in the age range 0-4 years, at the outpatient clinic at the Linköping Burn Centre were included. Laser Doppler and laser Doppler speckle imaging data from this cohort have previously been presented in a series of publications [2,3,10,11,23,24]. In short, the patients were anesthetised rectally with ketamine [25] and the wound bed was properly cleaned prior to image capture. Image capturing was done in a climate-controlled room with regular indoor lighting (no windows). For this study, images based on a high-performance light camera were taken in parallel to those presented in the previous publication [3].

2.2. Data

One hundred burn wound images were acquired from patients aged 4 years or less using a TiVi700, which is a tissue viability imaging device (WheelsBridge AB, Sweden). The TiVi700 is a high-performance digital camera equipped with polarisation filters and flashlights all around its lens to avoid reflection artefacts due to room light and/or the camera flash and burn wound fluid.

An example of such data is given in Fig. 1a, which shows a burn wound image captured by the TiVi700, whereas Fig. 1b shows its ground truth, labelled manually by an expert burn clinician of the Linköping University Hospital Burn Centre. The ground truths, like the one in Fig. 1b, were defined based on the wound's healing time: a superficial partial-thickness wound healed within 7 days; a superficial to intermediate partial-thickness wound healed between 8 and 13 days; an intermediate to deep partial-thickness wound healed within 14-20 days; and a deep partial or full-thickness wound did not heal within 20 days and underwent surgery. Importantly, surgery was always done after day 20, which gives the ground truth a high degree of reliability, as all children were observed until day 20 and healing earlier than that was recorded by one clinician. These earlier healing events were divided into re-epithelialization within 7, 14 or 21 days, respectively.

The target of this project is to achieve a segmentation result, as in Fig. 1b, from a burn wound image, as in Fig. 1a, using artificial intelligence, more specifically a convolutional neural network similar to the U-Net proposed by Ronneberger et al. [16], but with a different depth, loss function, and optimizer, and with the residual-learning idea applied to it.

Since each burn wound image has a very complex background rich in objects (i.e. healthy skin, blankets, medical tools, nurses' gloves, monitors, etc.), the background is removed so that the CNN focuses only on segmenting the region of interest, the burn wound, and on distinguishing between the four different burn depths.

The convolutional neural networks minimise the dice loss [18,26], rather than the more generally used cross-entropy loss, to achieve a good segmentation result, because the former does not count the true negatives (the background), which normally account for the majority of pixels in the image. The higher the dice coefficient, the higher the accuracy, but the converse is not true. The dice loss is mathematically defined as

DL = 1 - D = 1 - \frac{2 \sum_{c=1}^{C} w_c \sum_{n}^{N} g_{cn} p_{cn}}{\sum_{c=1}^{C} w_c \sum_{n}^{N} (g_{cn} + p_{cn})},  (1)

where DL stands for the dice loss, D for the dice coefficient, C for the number of classes, N for the number of pixels, w_c for the weight assigned to class c, and g_{cn} and p_{cn} for the n-th pixel belonging to the ground truth and to the network's prediction on the c-th class, respectively. When w_c is not a vector of ones, Eq. (1) represents the generalized dice loss. Here, w_c is defined as

w_c = \frac{N_s}{C N_c},  (2)

where N_s is the number of pixels in the image and N_c the number of pixels that belong to class c. In this way all the classes are balanced, because the network weighs each class according to its respective weight. If, for example, many pixels belong to one class over the whole dataset, its weight will be low; vice versa, if few pixels belong to one class, its weight will be high according to Eq. (2). The network will therefore pay more attention to learning a class represented by few pixels rather than a class represented by many pixels.
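As a concrete illustration, Eqs. (1) and (2) can be sketched in NumPy. This is a hypothetical minimal implementation for flattened one-hot masks, not the authors' Keras code.

```python
import numpy as np

def class_weights(g):
    """Eq. (2): w_c = N_s / (C * N_c), balancing classes by pixel count.
    g: one-hot ground truth of shape (N_pixels, C)."""
    n_s, n_classes = g.shape
    n_c = g.sum(axis=0)               # pixels belonging to each class
    return n_s / (n_classes * n_c)

def dice_loss(g, p, w=None):
    """Eq. (1): DL = 1 - 2 * sum_c w_c sum_n g_cn p_cn
                        / sum_c w_c sum_n (g_cn + p_cn)."""
    if w is None:
        w = np.ones(g.shape[1])       # all-ones weights -> plain dice loss
    num = 2.0 * np.sum(w * np.sum(g * p, axis=0))
    den = np.sum(w * np.sum(g + p, axis=0))
    return 1.0 - num / den
```

A perfect prediction gives a loss of 0, and a rare class receives a large weight, which is exactly the balancing behaviour described above.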

Since the burn image database contains images representing all or only some of the four burn depths, the segmentation step is applied only to the images that represent burns with all the burn depths (17 in total), in order to enable the convolutional neural network to learn from a homogeneous dataset. A simplified diagram of the semantic segmentation is shown in Fig. 2.

The accuracy (Acc), F1 coefficient, intersection over union (IoU), precision (P) and sensitivity (S) are calculated to measure the performance of the segmentation obtained from the second convolutional neural network against the ground truth. These metrics are calculated as:

Acc = (TP + TN) / (TP + TN + FP + FN)  (3)
F1 = 2TP / (2TP + FP + FN)  (4)
IoU = TP / (TP + FP + FN)  (5)
P = TP / (TP + FP)  (6)
S = TP / (TP + FN)  (7)

where TP, TN, FP and FN represent true positives, true negatives, false positives, and false negatives, respectively. These values are calculated on the binary class images; so, for example, there is a TP when both the ground truth and the model's prediction segmentation have value 1 for the same pixel. Fig. 3 illustrates the space of the defined metrics for an image segmentation.
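Eqs. (3)-(7) can be computed from a pair of binary masks with a few lines of NumPy; the helper below is a hypothetical sketch, not the study's evaluation code.

```python
import numpy as np

def binary_metrics(gt, pred):
    """Compute Acc, F1, IoU, P and S (Eqs. (3)-(7)) from two binary masks.
    gt, pred: arrays of 0/1 values, one entry per pixel."""
    gt, pred = np.asarray(gt, bool), np.asarray(pred, bool)
    tp = np.sum(gt & pred)      # both masks 1
    tn = np.sum(~gt & ~pred)    # both masks 0
    fp = np.sum(~gt & pred)     # predicted 1, truth 0
    fn = np.sum(gt & ~pred)     # predicted 0, truth 1
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "F1": 2 * tp / (2 * tp + fp + fn),
        "IoU": tp / (tp + fp + fn),
        "P": tp / (tp + fp),
        "S": tp / (tp + fn),
    }
```

Note that, unlike Acc, the F1, IoU, P and S formulas never involve TN, which is why they are insensitive to a large background, as discussed for the dice loss above.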

The algorithm was written in Python 3.6, using the Keras library [27], on a supercomputer with 512 GB RAM, 2 Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30 GHz (18 cores each), and 3 Nvidia GTX 1080 8 GB GPUs.

Fig. 1 - Original burn wound image (a) and its burn-depth-area ground truth (b), drawn by a clinician specialist: white for deep partial and full-thickness depth; silver for intermediate to deep partial-thickness; grey for superficial to intermediate partial-thickness; dark grey for superficial partial-thickness; and black for uninjured skin and the background.


This study was approved by the Regional Ethics Committee in Linköping and conducted in compliance with the "Ethical principles for medical research involving human subjects" of the Helsinki Declaration.

2.3. Training of the algorithm

Before starting the training process, since there were only 17 images available with all the burn depths present, data augmentation was strongly needed. In order to evaluate the convolutional neural network, leave-one-out cross-validation was computed, so 16 images were used for the training and validation sets and just 1 for the testing set. On these 16 original images, rotations of 0, 90, 180 and 270 degrees were applied, and for each of these rotated images 40 new images were created using the elastic deformation technique [28]. In the end, 3936 images were augmented from the 16 original ones and then split 90-10% into training and validation sets, respectively, for the second convolutional neural network training process.
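The augmentation scheme above (four rotations per image, then elastic deformations of every rotated copy [28]) might be sketched as follows. The deformation parameters (alpha, sigma) are illustrative guesses rather than the paper's settings, and the sketch operates on single-channel images.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, rng=None):
    """Elastic deformation (Simard et al. [28]): a random displacement
    field per axis, Gaussian-smoothed, applied with bilinear interpolation."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1, mode="reflect").reshape(h, w)

def augment(images, n_deform=40, seed=0):
    """Rotate each image by 0/90/180/270 degrees, then derive n_deform
    elastic variants of every rotated copy, as described in Section 2.3."""
    out = []
    for img in images:
        for k in range(4):
            rot = np.rot90(img, k)
            out.append(rot)
            out.extend(elastic_deform(rot, rng=seed + i)
                       for i in range(n_deform))
    return out
```

The same displacement field would of course have to be applied to the ground-truth mask as to the image, so that labels stay aligned with the deformed pixels.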

From Table 1 it is possible to notice that the best network, the one with the highest dice coefficient, is number 3. Minimizing the dice loss, the accuracy and dice coefficient converge to almost the same value and, after leave-one-out cross-validation, the system has an average accuracy and dice coefficient of 96.81%. Moreover, the average weights for each class used to balance the training process, calculated using Eq. (2) after image augmentation on each leave-one-out fold, are:

w0 (background), w1 (superficial, I), w2 (superficial partial-thickness, II), w3 (deep partial-thickness, III), w4 (full-thickness, IV),

where w0 is the weight belonging to the background, whereas the others belong to the burn-depth classes I, II, III and IV, respectively (see Eq. (2)). As intended, the background weight has a small value and, on the other hand, the full and deep-thickness depth weight has a high value, whereas classes II and III have similar weights, so the classification between them might be complicated.

Fig. 4, below, shows four different semantic segmentation results, using networks 3, 10, 12 and 16 of Table 1, on their respective test images. Each image illustrates the burn wound without the background, its ground truth, and the convolutional neural network's prediction. Moreover, it reports the accuracy, F1 coefficient, intersection over union, precision and sensitivity metrics extracted from the ground truth and the convolutional neural network's prediction for each class (see Eqs. (3)-(7)).

It is possible to conclude that Fig. 4 illustrates four good semantic segmentation results, because the reported metrics have very high values. Table 2 reports the averages of the same metrics over all the 17 burn-wound images for each class, and it is possible to notice that class II and class III are the ones with the lowest metric values. This was expected, since it also happened in [3], and because expert burn clinicians have more difficulty distinguishing those classes. Nevertheless, they have high accuracy and a suitable F1 coefficient, precision and sensitivity to help burn clinicians and surgeons achieve a better diagnosis. There are no problems distinguishing class I and class IV, since their metrics show F1 coefficients of 93.46% and 86.77%, intersection over union of 88.68% and 78.53%, precision of 93.35% and 83.96%, and sensitivity of 93.86% and 92.80%, respectively.

After having trained the algorithm on these 17 images, the remaining 83 were examined.

3. Results

Since we did not have access to other than 83 burn-wound images, which unfortunately did not contain all the burn depths, the 17 convolutional neural networks created during the leave-one-out cross-validation needed to be used to evaluate the final set of images (n = 83). If a convolutional

Table 1 - Accuracy and dice coefficient values obtained after leave-one-out cross-validation.

Network   Accuracy        Dice coefficient
1         0.8814          0.8812
2         0.8042          0.8040
3         0.9977          0.9977
4         0.9968          0.9967
5         0.9972          0.9972
6         0.9930          0.9930
7         0.9568          0.9567
8         0.9937          0.9936
9         0.9906          0.9906
10        0.9911          0.9911
11        0.9976          0.9976
12        0.9852          0.9852
13        0.9865          0.9864
14        0.9867          0.9867
15        0.9840          0.9480
16        0.9898          0.9898
17        0.9619          0.9617
Average   0.9681±0.0498   0.9681±0.0498

Fig. 3 - Illustration of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) between a binary ground truth and its prediction.


Fig. 4 - Semantic segmentation results using networks 3, 10, 12 and 16 on their respective test images. Each segmentation result shows the burn wound image, the ground truth and the convolutional neural network's prediction. Moreover, accuracy (Acc), F1 coefficient (F1), intersection over union (IoU), precision (P) and sensitivity (S) are reported for each class.


neural network has learnt how to distinguish four burn depths in an image, it should be able to do that also in an image that does not present all of them. Accuracy and dice coefficient are reported in Table 3 for each network. From Table 3 it is possible to notice that all the networks report accuracies and dice coefficients above 90%, with the 4th being the best, at approximately 93% for both.
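The averages reported in Table 3 can be reproduced directly from the per-network accuracies (using the population standard deviation, which matches the ± figure in the table):

```python
import numpy as np

# Per-network accuracies from Table 3 (17 leave-one-out folds,
# each evaluated on the 83 held-out images).
acc = np.array([0.9202, 0.9160, 0.9173, 0.9306, 0.9218, 0.9070,
                0.9257, 0.9079, 0.9143, 0.9191, 0.9207, 0.9207,
                0.9251, 0.9147, 0.9239, 0.9218, 0.9152])

mean, std = acc.mean(), acc.std()   # np.std defaults to population std
print(f"{mean:.4f} +/- {std:.4f}")  # prints 0.9189 +/- 0.0059
```

This also makes the spread visible at a glance: the folds vary by well under a percentage point, so the 17 networks behave consistently on images they were never trained on.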

4. Discussion

In this paper we used a modified U-Net with residuals to segment four different burn depths (superficial partial-thickness (I), superficial to intermediate partial-thickness (II), intermediate to deep partial-thickness (III), and deep partial and full-thickness (IV)) in images generated by a high-performance light camera with polarisation filters, with the aim of training the network to predict burn depth. After acquiring 100 burn images, seventeen images were used for training. Leave-one-out cross-validation reports were generated, and an average accuracy and dice coefficient of almost 97% was obtained. After that, the remaining 83 burn-wound images were evaluated using the different networks from the cross-validation, achieving an accuracy and dice coefficient both averaging 92%. The F1 score, or dice score coefficient, is the metric typically used to evaluate image segmentation results because its equation (see Eqs. (1) and (4)) does not consider the true negatives, focusing instead on the true positives and on where the prediction in this clinical setting most often fails (false negatives and false positives). In other words, it measures how well a segmentation predicted by the network overlaps with the "true" segmentation provided by the clinician specialist, in this study made at day 20 after the burn.

4.1. Related works

Burn wound assessment by computer vision techniques is not yet widespread, but some scientists have investigated this field. Pinero et al. [6] identified 16 texture features for burn image segmentation and classification. These features were then inspected by sequential forward and backward selection methods via a fuzzy-ARTMAP neural network. This method achieved an average accuracy of about 83% using 250 images of 49×49 pixels, divided into 5 burn appearance classes: blisters, bright red, pink-white, yellow-beige, and brown. Wantanajittikul et al. [29] used the Hilbert transform and texture analysis to extract feature vectors and then applied a support vector machine (SVM) classifier to classify burn depth. The best accuracy result for a 4-fold cross-validation was 90%, using 5 images as the validation set and 34 images as the training set; 75% correct classification on a blind test was then obtained. Acha et al. [30] applied multidimensional scaling (MDS) analysis and a k-nearest neighbour classifier for burn-depth assessment. Using 20 images as a training set and 74 for testing, 66% accuracy was obtained for classifying burn wounds into three depths, and 84% accuracy for classifying those that needed or did not need grafts. Serrano et al. [7] used a strict selection of texture features of burn wounds for MDS analysis and SVM, and obtained 80% accuracy in classifying wounds that needed grafts and those that did not. Chauhan et al. [31] used AI to classify body parts from 109 burn-wound images (30 portray burn wounds on the face, 35 on the hand, 23 on the back and 21 on the inner forearm) with sizes of 350-450 × 300-400 pixels, achieving overall classification accuracies of 91% and 93% using a dependent and an independent ResNet-50 convolutional neural network, respectively. We ourselves [3] also tried AI, similarly, for burn-depth classification. We collected 676 samples of size 224×224 pixels from 23 burn-wound images (almost 100 samples for each class: the four burn depths plus normal healthy skin and the background) and achieved an average, minimum, and maximum accuracy of 82%, 72%, and 88%, respectively, using ResNet-101 after 10-fold cross-validation. Moreover, the average accuracy, sensitivity, and specificity extracted for the four burn depths were 91%, 74%, and 94%, respectively.

5. Study limitations

Constructing a training dataset requires large volumes of study images. Given the frequency of scalds, the collection of very large image databases for training purposes is not feasible, and therefore the dataset used in this study may be

Table 2 - Average accuracy (Acc), F1 coefficient (F1), intersection over union (IoU), precision (P) and sensitivity (S) over all the 17 burn wound images for each burn depth after leave-one-out cross-validation.

Class   Acc      F1       IoU      P        S
I       0.9925   0.9346   0.8868   0.9335   0.9389
II      0.9867   0.7890   0.6907   0.8423   0.7800
III     0.9763   0.7287   0.6177   0.7501   0.7464
IV      0.9806   0.8677   0.7853   0.8396   0.9280

Table 3 - Accuracy and dice coefficient values for the remaining 83 burn wound images, which do not show all the burn depths, but only some of them.

Network   Accuracy        Dice coefficient
1         0.9202          0.9202
2         0.9160          0.9156
3         0.9173          0.9173
4         0.9306          0.9305
5         0.9218          0.9218
6         0.9070          0.9070
7         0.9257          0.9257
8         0.9079          0.9070
9         0.9143          0.9142
10        0.9191          0.9191
11        0.9207          0.9207
12        0.9207          0.9207
13        0.9251          0.9251
14        0.9147          0.9147
15        0.9239          0.9239
16        0.9218          0.9218
17        0.9152          0.9150
Average   0.9189±0.0059   0.9188±0.0060


claimed to be too small, despite the fact that almost two years' collection of patients was made. To improve this point, a specific image augmentation technique was used (the elastic deformation technique [28]). By this measure the 16 initial training images were artificially expanded to 3936 images, thus improving the prediction metrics. Having more images for the training set is important for the further improvement of the technique.

Another study limitation is of course what is claimed to be "the final" healing result, and especially determining the day of total re-epithelialization used to train the prediction method. In this study we awaited the healing situation at day 20 to reduce the risk of a subjective effect on the outcome presented. However, this needs to be addressed further in coming studies.

6. Conclusion

In this paper, we wanted to extend the ambition beyond our previous publication [3], adding local classification to the global one. As shown in the previous sections, AI is a powerful tool that can be used for burn-depth assessment, achieving a global dice coefficient of 97% after leave-one-out cross-validation, and averages of the F1 coefficients over all the 17 test images of 93%, 79%, 73% and 87% for superficial partial-thickness, superficial to intermediate partial-thickness, intermediate to deep partial-thickness, and deep partial and full-thickness burns, respectively. These values are suitable for a better burn diagnosis, since expert burn clinicians assess a burn wound with 75% accuracy, compared to the 92% presented in this paper. Importantly, it then needs to be stressed that the present paper is based on light photography images rather than laser Doppler based images. Nevertheless, the convolutional neural network performance and its metrics may well increase with the availability of larger burn image databases. This obstacle might be overcome with the use of Generative Adversarial Nets (GANs) [32-34] for image augmentation of the training images. Such future improvements appear especially interesting given the accuracy and practical simplicity of the method presented.

Declarations of conflicts of interest

None.

REFERENCES

[1] Hettiaratchy S, Papini R. ABC of burns: initial management of a major burn: II—assessment and resuscitation. BMJ: Br Med J 2004;329:101.
[2] Cirillo MD, Mirdell R, Sjöberg F, Pham TD. Tensor decomposition for colour image segmentation of burn wounds. Sci Rep 2019;9(1):3291.
[3] Cirillo MD, Mirdell R, Sjöberg F, Pham TD. Time-independent prediction of burn depth using deep convolutional neural networks. J Burn Care Res 2019;40(6):857-63, doi: http://dx.doi.org/10.1093/jbcr/irz103.
[4] Jeschke MG. Burn care and treatment: a practical guide. Springer; 2013.
[5] Johnson RM, Richard R. Partial-thickness burns: identification and management. Adv Skin Wound Care 2003;16(4):178-87.
[6] Pinero BA, Serrano C, Acha JI, Roa LM. Segmentation and classification of burn images by color and texture information. J Biomed Opt 2005;10(3):034014.
[7] Serrano C, Boloix-Tortosa R, Gómez-Cía T, Acha B. Features identification for automatic burn classification. Burns 2015;41(8):1883-90.
[8] Wearn C, Lee KC, Hardwicke J, Allouni A, Bamford A, Nightingale P, et al. Prospective comparative evaluation study of Laser Doppler Imaging and thermal imaging in the assessment of burn depth. Burns 2018;44(1):124-33.
[9] Shin JY, Yi HS. Diagnostic accuracy of laser Doppler imaging in burn depth assessment: systematic review and meta-analysis. Burns 2016;42(7):1369-76.
[10] Mirdell R, Farnebo S, Sjöberg F, Tesselaar E. Accuracy of laser speckle contrast imaging in the assessment of pediatric scald wounds. Burns 2018;44(1):90-8.
[11] Mirdell R, Iredahl F, Sjöberg F, Farnebo S, Tesselaar E. Microvascular blood flow in scalds in children and its relation to duration of wound healing: a study using laser speckle contrast imaging. Burns 2016;42(3):648-54.
[12] Lindahl F, Tesselaar E, Sjöberg F. Assessing paediatric scald injuries using laser speckle contrast imaging. Burns 2013;39(4):662-6.
[13] Jaspers ME, van Haasterecht L, van Zuijlen PP, Mokkink LB. A systematic review on the quality of measurement techniques for the assessment of burn wound depth or healing potential. Burns 2019;45(2):261-81.
[14] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. Conference on Computer Vision and Pattern Recognition Proceedings 2015.
[15] Badrinarayanan V, Kendall A, Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 2017;39:2481-95.
[16] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention 2015.
[17] Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25(1):44.
[18] Milletari F, Navab N, Ahmadi S-A. V-Net: fully convolutional neural networks for volumetric medical image segmentation. 2016 Fourth International Conference on 3D Vision (3DV) 2016.
[19] Jin Q, Meng Z, Pham TD, Chen Q, Wei L, Su R. DUNet: a deformable network for retinal vessel segmentation. Knowledge Based Syst 2019;178:149-62.
[20] Li X, Chen H, Qi X, Dou Q, Fu C-W, Heng P-A. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans Med Imaging 2018;37(12):2663-74.
[21] Oktay O, Schlemper J, Folgoc LL, Lee M, Heinrich M, Misawa K, et al. Attention U-Net: learning where to look for the pancreas. arXiv 2018, preprint arXiv:1804.03999.
[22] Isensee F, Kickingereder P, Wick W, Bendszus M, Maier-Hein KH. No New-Net. International MICCAI Brainlesion Workshop. Springer; 2018. p. 234-44.
[23] Mirdell R, Farnebo S, Sjöberg F, Tesselaar E. Interobserver reliability of laser speckle contrast imaging in the assessment of burns. Burns 2019;45(6):1325-35.
[24] Elmasry M, Mirdell R, Tesselaar E, Farnebo S, Sjöberg F, Steinvall I. Laser speckle contrast imaging in children with scalds: its influence on timing of intervention, duration of healing and care, and costs. Burns 2019;45(4):798-804.
[25] Grossmann B, Nilsson A, Sjöberg F, Nilsson L. Rectal ketamine during paediatric burn wound dressing procedures: a randomised dose-finding study. Burns 2019;45(5):1081-8.
[26] Sudre CH, Li W, Vercauteren T, Ourselin S, Cardoso MJ. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer; 2017. p. 240-8.
[27] Chollet F. Keras. [Online]. Available: https://keras.io.
[28] Simard PY, Steinkraus D, Platt JC. Best practices for convolutional neural networks applied to visual document analysis. 7th International Conference on Document Analysis and Recognition (ICDAR) 2003.
[29] Wantanajittikul K, Auephanwiriyakul S, Theera-Umpon N, Koanantakool T. Automatic segmentation and degree identification in burn color images. The 4th 2011 Biomedical Engineering International Conference 2012.
[30] Acha B, Serrano C, Fondón I, Gómez-Cía T. Burn depth analysis using multidimensional scaling applied to psychophysical experiment data. IEEE Trans Med Imaging 2013;32(6):1111-20.
[31] Chauhan J, Goswami R, Goyal P. Using deep learning to classify burnt body parts images for better burns diagnosis. SIPAIM-MICCAI Biomedical Workshop. Springer; 2018. p. 25-32.
[32] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Advances in Neural Information Processing Systems. p. 2672-80.
[33] Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal 2019;101552.
[34] Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, Greenspan H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018;321:321-31.

