TCP Performance in Tactical Ad-Hoc Networks



Karlstad University
Faculty of Economic Sciences, Communication and IT
Computer Science Department


TCP Performance in Tactical Ad-Hoc Networks

Velizar G. Dimitrov


Bachelor Thesis


Thesis Advisors:

Prof. Andreas Kassler, Karlstad University

Asst. Prof. Rossitza Goleva, Technical University of Sofia



1. Introduction

Communication has been a major resource for the human race, enabling information to be transferred from one person to another. While many different forms exist, such as sign language, speech, and body language, it is telecommunications and the advance of the digital era that have changed the world over the last century. The public switched telephone network (PSTN) was the first great achievement to provide individuals with multidirectional communication, essentially breaking the boundaries of distance. Wireless communications developed alongside the desire for richer content, leading to the abundance of cellular phones and cable TV today.

In the early 1990s, the development of telecommunication technology led to a revolution, as the Internet started offering attractive services that let users obtain information stored in a computer in any part of the world. Communication technology has reached a stage where it even substitutes for social interaction, and in the past decade social networking has become a multi-billion-dollar industry.

The Internet, the greatest technological phenomenon of the modern era, has expanded more than anyone could have imagined in just a few decades. Access to the global network is now available in numerous ways: cell phones, PSTN modems, ISDN, ADSL, broadband, satellite, and so on. The integration of communication technologies has led to a state where a particular technology is just a means of accessing a unified information source. The proliferation of mobile computing and communication devices is the driver of this revolutionary change in the information society.

Moving to an era of ubiquitous communication imposes a new set of technical challenges. The very nature of ubiquitous devices makes wireless networking the easiest solution, so it is only natural that wireless communications have experienced such tremendous development over the past decade. Today's mobile user can check email and read the news on a cellular phone; travelers with portable computers surf the Internet from almost any public location; tourists search online for attractions and check directions on GPS maps; researchers collaborate and exchange data through the global network; teleworkers attend online meetings and conferences. The world has effectively shrunk.

Mobile computing devices are getting smaller, cheaper, more convenient, and noticeably more powerful. They are capable of running applications and services that were previously typical of desktop workstations. Why users demand more and more capabilities from their mobile devices may be a purely philosophical question, but that demand is the fuel for the explosive progress of communication technology.

Most existing communication technologies rely upon a pre-built infrastructure to function and deliver their services. It is a well-known fact that the deployment and support of a commercial network infrastructure is very time-consuming and considerably expensive. A problem with relying on an infrastructure-based system is that if it breaks down, the communication it supports also breaks down, unless redundancy and backups are highly prioritized. A prime example of this is a disaster area. Earthquakes, fires, and floods can render an infrastructure of any scale unusable, and rescue operations need to communicate in order to coordinate work or get in touch with survivors in damaged buildings. Another example is military operations: in the jungle there is a low probability of finding a suitable communication infrastructure.

An alternative way to deliver services has emerged. Mobile Ad Hoc Networks (MANETs) are complex distributed systems that consist of autonomous mobile nodes that can move freely and organize dynamically. MANETs are focused on having the mobile nodes seamlessly connect to one another within their respective transmission ranges through automatic configuration, setting up a dynamic ad hoc network that is both flexible and powerful. Every node in the network functions both as a host and as a router, and its capability for movement makes the topology temporary and the network conditions dynamic. The routing of packets relies on multi-hop principles.

To enable communication in such systems, a great number of problems needed resolving. Communication theory was introduced as a relatively new science whose purpose was to deal with the challenges of unified digital communication. The TCP/IP [1] protocol suite has proven itself as the reliable workhorse of the Internet, albeit with some enhancements along the way. Therefore the naturally obvious solution to the problems of MANET communication at the transport layer would be to treat it like any other Internet system: deploy TCP/IP in the end nodes, construct a routing protocol, and start communicating. Another motive to use TCP in MANETs is the opportunity of connecting to the Internet - an end-to-end connection can be maintained without the need for protocol-converting proxies or gateways.

The Transmission Control Protocol (TCP) was designed as a reliable, end-to-end, connection-oriented protocol for data delivery over somewhat unreliable networks. Theoretically, TCP should be independent of the underlying network technology and infrastructure: it should make no difference to TCP whether the underlying IP runs over wireless or wired connections. But it turns out that TCP was designed and optimized on assumptions that are specific to wired networks, and ignoring the properties and peculiarities of wireless networks leads to dramatically worsened performance [2].

The purpose of the current work is to assess and analyze the problems and weaknesses of TCP in MANETs, as well as to investigate various approaches and techniques that try to overcome TCP's shortcomings. This study was conducted in close collaboration between the Technical University of Sofia, Bulgaria and Karlstad University, Sweden, represented by Prof. Andreas Kassler and Jonas Karlsson.



2. Challenges in the MANET environment

Mobile ad hoc networks are formed dynamically and are prone to frequent topology changes. The topology, as well as other network conditions, can change rapidly and unpredictably. Nodes are free to move arbitrarily and may or may not organize among themselves. A MANET is infrastructure-less, self-sustained, and may or may not be connected to an outside network. Each node should be able to communicate with any other node within its transmission range. For communication with nodes beyond its transmission range, intermediate nodes relay the message hop by hop. Routes between nodes are dynamic and should generally be considered multi-hop.

Although ad hoc networks offer great flexibility and convenience, these do come at a price. Ad hoc networks inherit the problems of traditional radio/wireless communication:

• The wireless medium and its properties are not considered absolute.

• The communication range and boundaries are not readily observable.

• Since the wireless medium is shared, the signal is not protected from outside signals, and the signal itself is not constrained between the communicating node pair.

• The wireless medium is considerably less reliable than wired media.

• The communication channel has time-varying and asymmetric propagation properties.

These properties of the wireless medium lead to the following issues concerning communication and networking:

• Lossy channels

Signal attenuation is observed due to the decrease in intensity of EM waves, which leads to a low signal-to-noise ratio (SNR) as distance increases. The relative velocity between sender and receiver causes a Doppler shift in the frequency of the arriving signal, worsening the probability of good reception. Reflection of EM waves from surrounding objects causes multipath fading, i.e. the signal travels over multiple paths before reaching the receiver, which causes fluctuations in amplitude and phase and results in destructive interference, effectively lowering the SNR.

• High bit error rate

Due to the previously specified channel properties, the radio network is prone to transmission errors. This leads to bit errors in the received data, which often cause the data to be discarded. This may be interpreted as a packet loss event.

• MAC contention

MANETs use multi-hop relaying. Packet reception and transmission use the same radio resources and must contend for medium access with other transmitters. When there are many transmissions, this leads to high contention, which can be problematic performance-wise.

• Hidden terminal problem

A typical scenario is illustrated in Figure 1. Two transmitters (A, C) are out of range of each other, i.e. they are "hidden" from one another, and both want to communicate with a third node (B) reachable from both. Since C cannot hear A's transmission, it assumes it is OK to transmit, which causes a collision at B, which receives both signals.

Figure 1 – Hidden terminal problem

To resolve the hidden terminal problem, [3] introduces a carrier sensing scheme coupled with a two-way handshake mechanism. Specifically, the source terminal A transmits a Request-To-Send (RTS) control message to the destination terminal B. When the destination terminal B receives the RTS message, it replies with a Clear-To-Send (CTS) control message indicating its readiness to receive, effectively reserving the medium for a predefined time frame: when C hears the CTS message, it knows that someone else is transmitting and will not interfere.
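The RTS/CTS reservation described above can be sketched as a toy model. The `Node` class, the `nav` (Network Allocation Vector) bookkeeping, and the three-node setup are illustrative assumptions, not part of any real MAC implementation:

```python
# Toy model of the RTS/CTS handshake. Node names (A, B, C) mirror the
# hidden-terminal scenario; `nav` is a simplified Network Allocation
# Vector: the time until which the medium is considered reserved.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []  # nodes within radio range
        self.nav = 0.0       # medium reserved until this time

    def hears(self, other):
        return other in self.neighbors

def rts_cts(sender, receiver, everyone, now, duration):
    """Try to reserve the medium; nodes hearing the RTS or CTS defer."""
    if sender.nav > now:
        return False  # medium already reserved, sender must defer
    for node in everyone:
        if node is sender or node is receiver:
            continue
        # Hearing the sender's RTS or the receiver's CTS sets the NAV.
        if node.hears(sender) or node.hears(receiver):
            node.nav = now + duration
    return True

# A and C are hidden from each other; both are in range of B.
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors = [b]
b.neighbors = [a, c]
c.neighbors = [b]
nodes = [a, b, c]
```

With this setup, a successful `rts_cts(a, b, nodes, now=0, duration=10)` sets C's NAV through B's CTS, so a later attempt by C to transmit within the reserved window is refused instead of colliding at B.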

• Exposed terminal problem

The problem is depicted in Figure 2. If A-B and C-D are communicating, and B and C are within range of each other, they experience the exposed terminal problem. When B transmits to A, C cannot transmit to D because it assumes its transmission will be jammed by B's transmission, when in fact it would not be.

Figure 2 – Exposed terminal problem

• Mobility- and loss-induced route breaks

As a result of frequent topology changes, route re-computations are necessary to maintain overall connectivity. In a MANET, nodes are responsible for forwarding/routing packets to the destination. Considering the mobility aspect, it is obvious that routes must be recomputed when nodes move out of transmission range. If node A was used to reach host B, but A becomes unreachable, another way must be found to reach B.


• Network partitioning

Two communicating nodes may be partitioned, i.e. separated with no route in between, for varying amounts of time. The partitions of a disjoint network are often referred to as "islands". As with route re-computations, nodes may move out of transmission range, and if there are few forwarding nodes the network may be partitioned until a new forwarding node joins the "islands" again. During partition events, no packets can be delivered between nodes in different "islands".

• Forward/reverse route failures

Depending on the routing protocol, routing may not be symmetric, i.e. traffic from A to B may not use the same route as traffic from B to A. This problem will be noted again among the asymmetry-type problems. Depending on the transport layer's requirements on acknowledgements, this may be a problem: a route failure can happen on the reverse path while packets are still being delivered on the forward path.

• Multipath routing

In order to improve network reachability, multiple routes may be maintained by the routing protocol; this is done, for example, in TORA. If packets take different routes, it is probable that the routes have different delays and that packet reordering will occur.

• Bandwidth asymmetry

Depending on the utilization of the channel and the quality of the signal, bandwidth may differ, with respect to the data rate, between the forward and reverse directions; e.g. IEEE 802.11g [4] can dynamically negotiate data rates from 6 to 54 Mbps.

• Loss rate asymmetry

This type of asymmetry occurs when the forward or reverse path is significantly lossier. In MANETs this is due to the fact that conditions and communication channel parameters differ from place to place.

• Route asymmetry

Route asymmetry implies that the forward and reverse traffic flows traverse different sets of nodes. This produces a difference in the parameters of the forward and reverse channels, such as throughput and delay, due to the different hop counts, and has an overall degrading effect. It is important to note that all of the asymmetry-type problems are interconnected: the presence of one can indicate the presence of any of the others, and each of them can be the cause or result of any of the others.



3. TCP's congestion control algorithm

TCP [1] is a connection-oriented, reliable, end-to-end transport protocol that provides efficient flow and congestion control and guarantees reliable, ordered data delivery. For ordering it utilizes sequence numbers, which are also used for reliability, as correctly received data is acknowledged to the sender. If no acknowledgement (ACK) is received for a packet within a certain time frame, the packet is retransmitted. To use the network resources efficiently, TCP estimates the available capacity of the receiver and of the network, and transmits as much as the lower of these allows. The receiver indicates how much data it can accept through an advertised window value in the returning acknowledgments. The congestion window is flow control imposed by the sender to determine the network capacity, while the advertised window is flow control imposed by the receiver to indicate the rate at which it can process the received data.

3.1. TCP Tahoe

TCP's congestion control [5, 6] is illustrated in Figure 3 and works as follows. TCP probes the network by sending more and more data. A congestion window (cwnd) limits the total number of unacknowledged packets that may be in transit end-to-end. Starting with one segment, more data is sent as acknowledgements are received, i.e. for every acknowledgment received the congestion window is increased by one segment. This results in an exponential growth of the congestion window. The sender can send into the network the minimum of its congestion window and the receiver's advertised window. This algorithm for increasing the congestion window is termed the Slow Start phase, although "slow start" is an understatement: the reception of acknowledgements indicates that there is additional capacity in the network.

Figure 3 – TCP's congestion control


When a loss occurs, either by the expiration of the retransmission timer (RTO) or by the reception of three duplicate acknowledgments (DUPACKs), TCP assumes that the network's capacity has been reached. Once presumed congestion is detected, the current value of the congestion window (cwnd) is halved and recorded in ssthresh (the slow start threshold). Then cwnd is reset to one segment (segment size is usually 536 or 512 bytes) and transmission resumes in the Slow Start phase. If the value of cwnd reaches the value of ssthresh without packet loss events occurring, TCP knows that it is approaching the network's capacity and enters the Congestion Avoidance phase; in other words, ssthresh indicates when the Slow Start phase should transition to the Congestion Avoidance phase. During this phase, the congestion window is increased linearly, versus the exponential growth of the Slow Start phase.

The linear growth in the Congestion Avoidance phase, coupled with the multiplicative reduction when congestion takes place, has coined TCP's congestion avoidance algorithm an Additive-Increase/Multiplicative-Decrease (AIMD) algorithm. It functions by probing the network for usable bandwidth, linearly increasing the congestion window by 1 Maximum Segment Size (MSS) per round-trip time (RTT) until loss occurs. When loss is detected, the policy changes to multiplicative decrease, which cuts the congestion window in half. The result is a saw-tooth behavior that represents the probing for bandwidth.
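The Slow Start and AIMD behaviour described above can be sketched in a few lines, one step per RTT. The initial ssthresh (64) and the saturation point (32 segments) are arbitrary illustrative values:

```python
# One congestion-control step per round-trip, Tahoe style. Window sizes are
# in segments; the constants are chosen only to make the saw-tooth visible.

def tahoe_step(cwnd, ssthresh, loss):
    """Return (cwnd, ssthresh) after one RTT."""
    if loss:                          # RTO expiry or three DUPACKs
        ssthresh = max(cwnd // 2, 2)  # remember half the window ...
        cwnd = 1                      # ... and fall back to Slow Start
    elif cwnd < ssthresh:
        cwnd *= 2                     # Slow Start: exponential growth per RTT
    else:
        cwnd += 1                     # Congestion Avoidance: +1 MSS per RTT
    return cwnd, ssthresh

cwnd, ssthresh = 1, 64
trace = []
for _ in range(12):
    loss = cwnd >= 32                 # pretend the path saturates at 32
    cwnd, ssthresh = tahoe_step(cwnd, ssthresh, loss)
    trace.append(cwnd)
# trace: [2, 4, 8, 16, 32, 1, 2, 4, 8, 16, 17, 18]
```

The trace shows the exponential climb, the collapse to one segment on loss, and the slower linear probing once cwnd reaches the halved ssthresh. A real sender would additionally cap its sending window at min(cwnd, advertised window).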

The congestion control algorithm described so far is the so-called Tahoe variant. Over the course of its history, the algorithm has undergone some modifications and optimizations.

3.2. TCP Reno

The next major and widespread variation of the congestion control algorithm is the Reno variant. The difference is that if three duplicate acknowledgements (DUPACKs) are received, Reno will halve the congestion window, perform a Fast Retransmit, and enter a phase called Fast Recovery, instead of starting from Slow Start as Tahoe would. If an acknowledgement (ACK) times out, Slow Start is used as it is in Tahoe.

Reno's Fast Recovery phase dictates that after Fast Retransmit sends what appears to be the missing segment, Congestion Avoidance, not Slow Start, is performed. This is an improvement that allows high throughput under moderate congestion, especially for large windows. The reason for not performing Slow Start in this case is that the receipt of the duplicate acknowledgements tells TCP more than just that a packet has been lost. Since the receiver can only generate a duplicate acknowledgement when another segment is received, that segment has left the network and is in the receiver's buffer. In other words, there is still data flowing between the two ends, and TCP does not want to reduce the flow abruptly by going into Slow Start. The Fast Retransmit and Fast Recovery algorithms are usually implemented together.
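A minimal sketch of how Reno reacts to the two loss signals; window sizes are in segments, and the function name and returned phase labels are illustrative:

```python
# Reno's reaction to a loss signal: a timeout means back to Slow Start
# (as in Tahoe), while three DUPACKs trigger Fast Retransmit followed by
# Fast Recovery into Congestion Avoidance.

def reno_on_loss(cwnd, ssthresh, dupacks, timeout):
    """Return (new_cwnd, new_ssthresh, next_phase)."""
    if timeout:
        # RTO expiry: like Tahoe, collapse the window and restart.
        return 1, max(cwnd // 2, 2), "slow_start"
    if dupacks >= 3:
        # Fast Retransmit the presumed-missing segment, then Fast Recovery:
        # halve the window and continue in Congestion Avoidance.
        new_ssthresh = max(cwnd // 2, 2)
        return new_ssthresh, new_ssthresh, "congestion_avoidance"
    return cwnd, ssthresh, "no_change"
```

For example, with a cwnd of 32 segments, three DUPACKs leave Reno sending with a window of 16 in Congestion Avoidance, while a timeout would drop it to a single segment.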

3.3. TCP Vegas

TCP Vegas [7] was developed at the University of Arizona in the mid-1990s. It proposes a modification to TCP's congestion avoidance algorithm that takes into account packet delay, rather than packet loss, in order to determine the transmission rate of packets into the network. TCP Vegas detects congestion based on increasing Round-Trip Time (RTT) values of the packets in the connection; thus the algorithm depends heavily on accurate calculation of the RTT value. If the RTT is "too small", the utilization of the network is considered low and higher throughput should be achievable, while if it is "too large", the connection is overrunning the network capacity. Since Vegas implements a linear increase/decrease mechanism for congestion control, its fairness is questionable and is a subject of research. An interesting scenario is the situation in which Vegas and Reno/New Reno share the same communication channel. Vegas' performance is noted to degrade, since it detects congestion before it actually happens, while Reno/New Reno detect congestion only when packet loss has already occurred.
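Vegas's delay-based adjustment is commonly described in terms of the difference between the expected and actual sending rates, kept between two thresholds (often called alpha and beta). The following sketch follows that description; the threshold values are illustrative:

```python
# Delay-based window adjustment in the spirit of Vegas: estimate how many
# segments are queued in the network from the RTT inflation, and keep that
# estimate between alpha and beta. Window sizes are in segments.

def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
    expected = cwnd / base_rtt              # rate with no queueing at all
    actual = cwnd / rtt                     # rate actually observed
    diff = (expected - actual) * base_rtt   # ~ segments buffered in the path
    if diff < alpha:
        return cwnd + 1                     # path underused: linear increase
    if diff > beta:
        return cwnd - 1                     # queues building: back off early
    return cwnd                             # within the target band
```

With the RTT equal to the base RTT (no queueing), the window grows by one segment per adjustment; once the measured RTT is well above the base RTT, Vegas shrinks the window before any loss occurs, which is exactly why it loses out when competing with loss-driven Reno.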

3.4. TCP New Reno

The New Reno [8] modification concerns the Fast Recovery algorithm, which begins when three duplicate acknowledgments are received and ends when either a retransmission timeout occurs or an acknowledgment arrives that acknowledges all of the data outstanding when the Fast Recovery procedure began.

During Fast Recovery, Congestion Avoidance takes place instead of Slow Start, as in Reno, but for every duplicate acknowledgment that arrives at the sender, a new unsent packet is transmitted. The purpose of this is to keep the transmit window full: the arrival of a duplicate acknowledgment indicates that a packet has been received at the destination, so it is safe to put another one in flight.

When an acknowledgement (ACK) arrives confirming only part of the packets in the congestion window, New Reno assumes this ACK points to a loss hole in the sequence space, and the first unacknowledged packet beyond the confirmed sequence number is resent. This allows New Reno to fill holes in the sequence space while effectively maintaining high throughput during the hole-filling process. This behavior is similar to TCP SACK (Selective Acknowledgment) [9], but New Reno outperforms it at high error rates.
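The partial-ACK logic above can be sketched as a small decision function; `recover` (the highest sequence number outstanding when recovery began) and the string labels are illustrative, not the protocol's actual variables:

```python
# New Reno's handling of ACKs during Fast Recovery: a partial ACK points at
# the next hole, which is retransmitted immediately; a full ACK ends recovery.

def newreno_on_ack(ack, recover, in_fast_recovery):
    if not in_fast_recovery:
        return "normal_processing"
    if ack >= recover:
        # Full ACK: everything outstanding at the start of recovery is in,
        # so Fast Recovery can end.
        return "exit_fast_recovery"
    # Partial ACK: the hole starts at `ack`, so resend that segment now
    # instead of waiting for three more DUPACKs or an RTO.
    return "retransmit_hole"
```

For instance, if recovery began with sequence number 1000 outstanding, an ACK for 900 triggers an immediate retransmission of the segment at 900, while an ACK covering 1000 ends recovery.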


4. TCP in MANETs


The performance of TCP degrades dramatically in MANETs. This happens because of the way TCP was designed, namely with wired networks in mind, although theoretically it should not make a difference. Because the bit error rate (BER) in wired networks is significantly lower than in wireless networks (much less than 1% [10]), TCP assumes that all packet losses are due to congestion of the network infrastructure. So the root of the problem regarding TCP's performance in MANETs is its congestion control algorithm. Due to the significantly higher error rate in wireless networks and to events such as route re-computations, network partitioning, and route failures, the probability of a packet loss is much greater. This packet loss, which is the result of routing failures and wireless errors, is then misinterpreted by TCP as an indication of congestion, which is not the case.

Another aspect of TCP is the retransmission timer (RTO). It keeps track of how long to wait for an acknowledgement and when to retransmit. It is initialized to 3 sec. when a connection is established, and its value is maintained/recalculated via Karn's [50] and Jacobson's [51] algorithms using the Round-Trip Time (RTT) values [30]. The RTO's value is doubled each time it expires and a given packet is retransmitted, and with route re-computations and network partitioning this timer can become inflated, which means that it takes a long time before a packet loss is detected and the packet is retransmitted. Since this timer is sensitive to delay variations (jitter), it can also be inflated if the link layer is more or less reliable, because the varying number of link layer retransmissions will cause delay fluctuations for the delivered packets.
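The RTO maintenance described above (Jacobson's smoothed estimator with Karn's sampling rule and exponential backoff, later standardized in RFC 6298) can be sketched as:

```python
# Jacobson's smoothed RTO estimator with Karn's rules: only RTT samples
# from segments that were never retransmitted are fed in, and the timer is
# doubled on every expiry. Constants follow RFC 6298; values in seconds.

class RtoEstimator:
    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip time variation
        self.rto = 3.0      # initial value, as in Reno/New Reno

    def on_rtt_sample(self, rtt):
        """Karn's rule: call only for unambiguous (non-retransmitted) samples."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

    def on_timeout(self):
        self.rto *= 2       # exponential backoff: repeated losses inflate RTO
```

A single 200 ms sample pulls the RTO down to the 1 s floor; two consecutive timeouts then inflate it to 4 s, which is precisely the inflation effect, harmful during route breaks, described above.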

TCP estimates network capacity as a count of allowed unacknowledged packets in flight, namely the congestion window (cwnd). In a mobile/dynamic environment with frequent route re-computations this value loses its meaning, as it is only valid for the route it was measured on. When the route changes, the new route may have much different characteristics, which in either case means that TCP performs suboptimally.

A great amount of research has been invested in dealing with TCP's performance issues in wireless ad hoc networks. Most studies found in the literature are based on simulations and experiments; analytic studies in the field are significantly fewer. Most of them are based on the idea of changing the functionality and/or the behavior of TCP to adapt it to the new network environment.

The approaches towards optimizing TCP for wireless ad hoc networks can be classified into three categories:

• Cross-layer approaches – implement techniques that involve the exchange of information between two or more OSI layers [52], aiming to improve the overall performance. These techniques allow the use of exotic strategies and offer great flexibility, but require modifications to the cross-communicating protocols. Changes may or may not be needed at the intermediate agents.

• Layered approaches – implement techniques that are constrained to a single OSI layer [52]. These techniques are easier to implement and maintain and require fewer modifications to the protocol stack, but lack the power and flexibility of the cross-layer solutions. Changes may or may not be needed at the intermediate agents.

• Alternative transport protocols – exploit the idea of totally scrapping TCP as a transport protocol and implementing an entirely new one, with or without similarities to existing transport protocols. This approach requires the most ingenuity and may yield good results, but raises the issue of compatibility with other networks. If the new protocol is deployed in an isolated network, it will work seamlessly, but isolated networks are hard to come across in real life. If the network is to be connected to the outside world, a special protocol-translating proxy/gateway is required, which has its drawbacks. Another solution is to introduce generic-protocol encapsulation for the new transport protocol in order to deliver it to its destination, where the receiver must be equipped to understand the alternative transport protocol. This also has its shortcomings, like bigger overhead, more processing power, etc.

In the following few sections, notable examples of each of the previously mentioned approaches will be given. Some of the solutions are strictly specialized for certain scenarios; others try to offer a more universal solution. Most of the studies are based on simulations and experiments and offer numerical results that can be compared, although the test scenarios differ.

4.1. Cross-layer solutions

TCP-Feedback [11] is an approach that relies on feedback from the network to handle route failures in MANETs. The idea is to enable the TCP sender to distinguish between route-break-induced losses and those due to network congestion. When the routing agent of a node detects a route break, it sends back a Route Failure Notification (RFN) message to the source. On receiving the RFN, the sender goes into a snooze state. A TCP sender in snooze state will stop sending packets and will freeze all its variables, such as timers and the pending congestion window size. The TCP sender remains in this snooze state until it is notified of the restoration of the route through a Route Re-establishment Notification (RRN) message. On receiving the RRN, the TCP sender leaves the snooze state and resumes transmission based on the previous sender window and timeout values. To avoid a deadlock scenario in the snooze state, when the TCP sender receives an RFN it triggers a route failure timer; when this timer expires, the congestion control algorithm is invoked normally. The authors report an improvement using TCP-F over TCP. The simulation scenario is basic and is not based on an ad hoc network; instead, they emulate the behavior of an ad hoc network from the viewpoint of the transport layer.
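The snooze-state machinery of TCP-F can be sketched as a small state machine. The class shape, the saved-variable tuple, and the route-failure timer value are simplifications of the paper's mechanism, not its actual implementation:

```python
# Sketch of TCP-F's snooze state: freeze on RFN, resume on RRN, and fall
# back to normal congestion control if the route-failure timer expires.

class TcpFSender:
    ROUTE_FAILURE_TIMEOUT = 10.0  # illustrative value, in seconds

    def __init__(self):
        self.state = "active"
        self.frozen = None        # (cwnd, rto) saved when the RFN arrives
        self.deadline = None

    def on_rfn(self, now, cwnd, rto):
        """Route Failure Notification: stop sending, freeze all variables."""
        self.state = "snooze"
        self.frozen = (cwnd, rto)
        self.deadline = now + self.ROUTE_FAILURE_TIMEOUT

    def on_rrn(self):
        """Route Re-establishment Notification: resume with saved state."""
        self.state = "active"
        return self.frozen

    def on_tick(self, now):
        """Route-failure timer: avoid deadlock if no RRN ever arrives."""
        if self.state == "snooze" and now >= self.deadline:
            self.state = "active"
            self.frozen = None    # discard the saved state; the normal
                                  # congestion control algorithm takes over
```

The point of the design is visible in `on_rrn`: because cwnd and the timers were frozen rather than reset, transmission resumes at the pre-failure rate instead of collapsing into Slow Start.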

TCP-ELFN (Explicit Link Failure Notification) [12] is similar to TCP-F [11]. However, in contrast to TCP-F, the evaluation of the proposal is based on a real interaction between TCP and the routing protocol. This interaction aims to inform the TCP agent about route failures when they occur. The authors use an ELFN message, which is carried by the route failure message sent by the routing protocol to the sender. The ELFN message is essentially similar to the ICMP [17] Destination Unreachable - Source Route Failed message, and contains the sender and receiver addresses and ports, as well as the TCP packet sequence number. On receiving the ELFN message, the source responds by disabling its retransmission timers and enters a "frozen" state. During the "frozen" period, the TCP sender probes the network to check if the route is restored. If the acknowledgment of the probe packet is received, the TCP sender leaves the "frozen" mode, resumes its retransmission timers, and continues normal operation. In the mentioned reference, the authors study the effect of varying the time interval between probe packets. They also evaluate the impact of the RTO and the congestion window upon restoration of the route. They find that a probe interval of 2 sec. performs best, and they suggest making this interval a function of the RTT instead of giving it a fixed value.

For the RTO and cwnd values upon route restoration, they find that using the prior values from before the route failure performs better than initializing cwnd to 1 packet and/or the RTO to 3 sec., the latter being the initial default value of the RTO in the TCP Reno and New Reno versions. This technique provides significant enhancements over standard TCP, but further evaluations are still needed. For instance, routing protocols other than the reactive protocol DSR [13] should be considered, especially proactive protocols such as OLSR [14]. Values other than 2 sec. for the probe interval should be checked as well.
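Why the probe interval matters can be seen from a toy discrete simulation of the "frozen" state: the route comes back at some instant, and the sender only notices at the next probe. This is a sketch under simplified assumptions (instant probes, lossless probe delivery), not the evaluation setup of [12]:

```python
def simulate_elfn_freeze(route_up_at, probe_interval=2.0, horizon=60.0):
    """Toy model of a TCP-ELFN sender in the "frozen" state.

    Probes are sent every `probe_interval` seconds starting at the moment
    the ELFN arrives (t = 0). The first probe sent at or after
    `route_up_at` is acknowledged, and transmission resumes with the
    pre-failure cwnd and RTO. Returns the resume time, or None if the
    route never returns within the horizon."""
    t = 0.0
    while t < horizon:
        if t >= route_up_at:
            return t  # probe acknowledged -> leave frozen state
        t += probe_interval
    return None
```

With the fixed 2-second interval a route restored at t = 3.5 s is only detected at t = 4.0 s, while an RTT-scaled (smaller) interval detects it almost immediately, which illustrates the authors' suggestion to tie the interval to the RTT.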

Ad-hoc TCP [15] is implemented as a layer between IP and TCP and manages the operation of TCP. More specifically, it handles non-congestion related packet loss, route changes, network partitioning, packet reordering, congestion and congestion window management. Ad-hoc TCP works by intercepting packets destined for TCP, inspecting them and, based on a certain algorithm, putting the sender into a persist, congestion control or retransmit state. When ICMP [17] Destination Unreachable messages arrive at Ad-hoc TCP, it assumes network partitioning; the sender is put into a persist state to halt data transmission, while Ad-hoc TCP itself is put into a disconnected state.

In the persist state, the sender sends periodic probes, which are acknowledged once the network converges; Ad-hoc TCP is then put back into the normal state and removes the sender from the persist state. Ad-hoc TCP then also controls the congestion window, because the capacity of the new network path is unknown and must be probed. To detect congestion, Ad-hoc TCP relies on ECN (Explicit Congestion Notification) [16] messages. Upon the reception of ECN-tagged packets, Ad-hoc TCP enters a congested state, lets TCP do its congestion control and ignores duplicate acknowledgements. To separate error loss from congestion loss, Ad-hoc TCP counts acknowledgements. When a triple DUPACK has arrived or when the RTO expires, Ad-hoc TCP puts the sender into persist mode and itself into a loss state. Ad-hoc TCP then proceeds with retransmissions while avoiding triggering the congestion control, as the packets were not lost because of congestion. When a true acknowledgment arrives, the sender is removed from persist mode and Ad-hoc TCP enters its normal state. The last mechanism also provides resiliency to packet reordering. When packets are reordered enough to normally trigger congestion control, Ad-hoc TCP intercepts the triple DUPACKs and puts the sender into the persist state. When acknowledgements for the reordered packets arrive, the sender is brought out of the persist state without invoking congestion control.
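The four-state behavior described above can be condensed into a small event-driven sketch. The state and event names below paraphrase the description in [15]; this is an illustration of the logic, not the ATCP implementation:

```python
class AtcpSketch:
    """Minimal sketch of Ad-hoc TCP's state logic: NORMAL, DISCONNECTED,
    CONGESTED and LOSS states, driven by network events."""

    def __init__(self):
        self.state = "NORMAL"
        self.sender_persist = False  # whether the TCP sender is held in persist mode

    def on_event(self, ev):
        if ev == "ICMP_DEST_UNREACHABLE":
            self.state = "DISCONNECTED"   # assume network partitioning
            self.sender_persist = True    # halt transmission, probe periodically
        elif ev == "ECN":
            self.state = "CONGESTED"      # real congestion: let TCP react
        elif ev in ("TRIPLE_DUPACK", "RTO"):
            if self.state == "NORMAL":
                self.state = "LOSS"       # treat as non-congestion loss
                self.sender_persist = True  # shield TCP's congestion control
        elif ev == "ACK":
            self.state = "NORMAL"         # true acknowledgment: resume
            self.sender_persist = False
        return self.state
```

The essential trick is visible in the `TRIPLE_DUPACK`/`RTO` branch: the loss is handled by retransmission from persist mode instead of by shrinking the congestion window, which is also what makes the scheme tolerant of packet reordering.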

TCP-BuS (Buffering capability and Sequence information) [18], like the previous proposals, uses network feedback in order to detect route failure events and to take an adequate reaction to them. The novel element in this proposal is the introduction of buffering capability in mobile nodes. The authors select the source-initiated on-demand ABR (Associativity-Based Routing) [19] routing protocol. It employs explicit notifications for route failures and route reestablishment, called Explicit Route Disconnection Notification (ERDN) and Explicit Route Successful Notification (ERSN). On receiving the ERDN from the node that detected the route failure, called the Pivoting Node (PN), the source stops sending. Similarly, after route reestablishment by the PN using a Localized Query (LQ), the PN transmits the ERSN to the source. On receiving the ERSN, the source resumes data transmission. During the Route ReConstruction (RRC) phase, packets along the path from the source to the PN are buffered. To avoid timeout events during the RRC phase, the retransmission timer value for buffered packets is doubled. As the retransmission timer value is doubled, the lost packets along the path from the source to the PN are not retransmitted until the adjusted retransmission timer expires. To overcome this, an indication is made to the source so that it can retransmit these lost packets selectively. When the route is restored, the destination notifies the source about the lost packets along the path from the PN to the destination. On receiving this notification, the source simply retransmits these lost packets. However, the packets buffered along the path from the source to the PN may arrive at the destination earlier than the retransmitted packets, so the destination would reply with duplicate ACKs; these unnecessary requests for fast retransmission are avoided. In order to guarantee the correctness of TCP-BuS, the authors propose to transmit the routing control messages ERDN and ERSN reliably. The reliable transmission is done by overhearing the channel after transmitting the control messages. If a node has sent a control message but did not overhear this message being relayed within a timeout, it concludes that the control message is lost and retransmits it. This proposal introduces many new techniques for TCP's improvement; the novel contributions are the buffering techniques and the reliable transmission of control messages. In their evaluation, the authors found that TCP-BuS outperforms standard TCP and TCP-F under different conditions. The evaluation is based only on the ABR routing protocol, however, and different routing protocols should be taken into account.
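The overhearing-based reliable delivery of ERDN/ERSN messages amounts to a simple send-and-listen retry loop. The sketch below is a simplified rendering of that idea; the function signature and the retry limits are assumptions of this sketch, not parameters from [18]:

```python
def send_control_reliably(transmit, overheard_relay, max_tries=4):
    """Sketch of TCP-BuS-style reliable control-message delivery.

    After transmitting a control message (ERDN/ERSN), the node listens to
    the shared channel for its downstream neighbour relaying the message.
    If the relay is not overheard within the timeout (modelled here as one
    attempt slot), the message is presumed lost and is retransmitted.

    `transmit` sends the message; `overheard_relay(attempt)` stands in for
    the channel and returns True if the relay was overheard. Returns the
    number of transmissions used, or None if all attempts fail."""
    for attempt in range(max_tries):
        transmit()
        if overheard_relay(attempt):
            return attempt + 1
    return None
```

A lossy channel on which the first two relays go unheard costs three transmissions, which shows why this mechanism matters: without it, a single lost ERDN would leave the source sending into a broken route.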

Split TCP [20]: TCP connections that span a large number of hops suffer from frequent route failures due to mobility. To improve the throughput of these connections and to resolve the unfairness problem, the Split TCP scheme was introduced to split long TCP connections into shorter localized segments. The interfacing node between two localized segments is called a proxy. The routing agent decides if its node has the role of proxy according to the inter-proxy distance parameter. The proxy intercepts TCP packets, buffers them and acknowledges their receipt to the source (or previous proxy) by sending a local acknowledgment (LACK). A proxy is also responsible for delivering the packets, at an appropriate rate, to the next local segment. Upon the receipt of a LACK (from the next proxy or from the final destination), a proxy purges the packet from its buffer. To ensure source-to-destination reliability, an ACK is sent by the destination to the source as in standard TCP. In effect, this scheme also splits the transport layer functionalities into end-to-end reliability and congestion control. This is done by using two transmission windows at the source: the congestion window and the end-to-end window. The congestion window is a sub-window of the end-to-end window. While the congestion window changes in accordance with the rate of arrival of LACKs from the next proxy, the end-to-end window changes in accordance with the rate of arrival of the end-to-end ACKs from the destination. At each proxy there is a congestion window that governs the rate of sending between proxies. Simulation results show that an inter-proxy distance between 3 and 5 hops provides up to 30% improvement in throughput while fairness is maintained. This method has significant drawbacks, however, such as large buffers and network overhead.
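The per-proxy behavior (buffer, LACK upstream, purge on LACK from downstream) can be sketched in a few lines. This is a toy illustration only, with invented names; the rate control between proxies and the end-to-end window are omitted:

```python
class SplitTcpProxy:
    """Toy sketch of a Split TCP proxy's buffering and LACK handling."""

    def __init__(self):
        self.buffer = {}  # seq -> payload, held until the next hop LACKs it

    def on_data(self, seq, payload, send_lack_upstream):
        """A data packet arrives from the source (or the previous proxy):
        buffer it for the next local segment and LACK it upstream."""
        self.buffer[seq] = payload
        send_lack_upstream(seq)

    def on_lack_from_downstream(self, seq):
        """The next proxy (or the destination) LACKed the packet:
        it is now someone else's responsibility, so purge it."""
        self.buffer.pop(seq, None)
```

The drawback noted above is visible directly in the sketch: every in-flight packet occupies buffer space at each proxy until the next segment acknowledges it, and every hop adds LACK traffic.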


Figure 4 – Split TCP's operation

4.2. Layered solutions

TCP Westwood [21] is a sender-side-only modification to TCP New Reno that is intended to better handle large bandwidth-delay product paths (large pipes) with potential packet loss due to transmission or other errors (leaky pipes) and with dynamic load (dynamic pipes). TCP Westwood relies on monitoring the ACK stream for information to help it better set the congestion control parameters: the Slow Start Threshold (ssthresh) and the Congestion Window (cwnd). In TCP Westwood, an "Eligible Rate" is estimated and used by the sender to update ssthresh and cwnd upon a loss indication or during its "Agile Probing" phase, a proposed modification to the Slow Start phase. In addition, a scheme called Persistent Non-Congestion Detection (PNCD) has been devised to detect a persistent lack of congestion and induce an Agile Probing phase to utilize large dynamic bandwidth. Significant efficiency gains can be obtained for large leaky dynamic pipes, while fairness is maintained. TCP Westwood+ is an evolution of TCP Westwood; it was soon discovered that the original Westwood bandwidth estimation algorithm did not work well in the presence of reverse traffic due to ACK compression.

TCP-Jersey [22] adopts an idea similar to TCP Westwood's of estimating the available bandwidth at the sender by observing the rate of the returning ACKs, but uses a rather simpler estimator. The bandwidth estimator proposed in TCP-Jersey is derived from a time-sliding window (TSW) estimator. Intermediate nodes warn the sender of congestion by employing the estimator to estimate the bandwidth occupied by individual flows. TCP-




