Mosek Userguide
From this example the input arguments for the linear program (2.1) follow easily (refer to the definitions of input arguments in Figure 2, Section 2.1).

Objective   The string sense is the objective goal and can be either minimize, min, maximize or max. The dense numeric vector c specifies the coefficients in front of the variables in the linear objective function, and the optional constant scalar c0 (reads: c zero) is a constant in the objective corresponding to c_0 in problem (2.1), which is assumed zero if not specified.
Constraint Matrix   The sparse matrix A is the constraint matrix of the problem with the constraint coefficients written row-wise. Notice that for larger problems it may be more convenient to define an empty sparse matrix, e.g. Matrix(0, nrow=30, ncol=50, sparse=TRUE), and then specify the non-zero elements one at a time, A[i,j] <- a_ij, rather than writing out the full matrix as done in the lo1 example.
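As a sketch of this incremental style (using the lo1 coefficients from Section 2.2.1 purely for illustration), one might write:

```r
library(Matrix)

# Start from an all-zero sparse matrix and fill in the non-zeros one at a time.
# Dimensions and values below are taken from the lo1 example.
A <- Matrix(0, nrow=3, ncol=4, sparse=TRUE)
A[1,1] <- 3; A[1,2] <- 1; A[1,3] <- 2
A[2,1] <- 2; A[2,2] <- 1; A[2,3] <- 3; A[2,4] <- 1
A[3,2] <- 2; A[3,4] <- 3
```

For many non-zeros, the Matrix package also offers sparseMatrix(i, j, x=...), which builds the matrix from index/value triplets in one call and is usually faster than element-wise assignment.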
Bounds   The constraint bounds bc with rows blc (constraint lower bound) and buc (constraint upper bound), as well as the variable bounds bx with rows blx (variable lower bound) and bux (variable upper bound), are both given as dense numeric matrices. These are equivalent to the bounds of problem (2.1), namely l^c, u^c, l^x and u^x.
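For instance, the bounds of the lo1 example from Section 2.2.1 can be assembled with rbind, which also attaches the expected row names:

```r
# Dense bound matrices; the row names give blc/buc and blx/bux.
# Values are those of the lo1 example; Inf and -Inf mark absent bounds.
bc <- rbind(blc=c(30, 15, -Inf), buc=c(30, Inf, 25))
bx <- rbind(blx=c(0, 0, 0, 0),   bux=c(Inf, 10, Inf, Inf))
```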
2.2   Linear  Programming   A  GUIDED  TOUR
Errors, warnings and response codes
If the mosek function is executed with a problem description as input, a log of the interface and optimization process is printed to the screen, revealing any errors or warnings the process may have encountered. As a rule of thumb, errors are given when a crucial part of the problem description is missing, or when an input argument is set to a value that does not make sense or is formatted incorrectly. Warnings, on the other hand, are given if some ignorable part of the problem has an empty definition (NULL, NA or NaN), or if the interface has to convert, or otherwise guess at, an interpretation of input in a non-standard form. Errors will always interrupt the optimization process whereas warnings will not. Since warnings can hold valuable debugging information and may be important to notice, they are both printed in the log at the time they occur and summarized just before the interface returns.
Error messages work fine when you are interacting with the interface, but in automated optimization frameworks they are not easily handled. This is why a response is always returned as part of the result when calling a function that may produce errors (see e.g. Figure 2). The response is a list containing a code and msg. When an error happens inside a function call to the MOSEK optimization library, the code is the response code returned by the function call[8] and msg is the corresponding error message. When an error happens within the interface, the code equals NaN and msg is the error message which, in case of unexpected execution paths, may require technical knowledge to understand. When no errors are encountered, the code is zero, but beware that comparison operators such as == do not handle NaN very well. Using identical may be a better alternative.
NB! The interface may return only partially constructed output.
(always check the response code; e.g. success <- identical(response_code, 0))
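To see why identical is preferred, consider this pure-R sketch, where the NaN value stands in for an interface error:

```r
code <- NaN              # what the interface returns on an internal error
code == 0                # NA: comparisons with NaN are not even FALSE
identical(code, 0)       # FALSE, which is what a success test needs
success <- identical(code, 0)
```

A plain == test would propagate NA into an if-statement and raise a runtime error, whereas identical always yields TRUE or FALSE.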
Interpreting the solution
The default optimizer for linear problems is the interior-point algorithm in MOSEK, which returns two solutions in the output structure. The interior-point solution (called itr) is the solution found directly by the interior-point algorithm. The basic solution (called bas) is a vertex solution derived from the values of itr, and could for instance be used to hot-start the simplex method if small changes were applied at some point later. If an optimizer using the simplex method had been selected instead of the interior-point algorithm, the bas solution would have been found directly and the itr solution would not exist.
[8] Check out: mosek.com → Documentation → C API manual → Response codes.
The lo1 Example (Part 2 of 5)   The lo1 example was solved using the default optimizer (the interior-point algorithm) and contains two solutions: the interior-point (itr) and the basic solution (bas), partly shown here.
As seen in Figure 5 and Figure 6, the solution space of the problem was not empty (as it is primal feasible) and the objective was not unbounded (as it is dual feasible). In addition, the optimizer was able to identify the optimal solution.
Figure  5:   Primal  Solution  I  (lo1)
r$sol$itr
{
$solsta
[1]   "OPTIMAL"
$prosta
[1]   "PRIMAL_AND_DUAL_FEASIBLE"
$skc
[1]   "EQ"   "SB"   "UL"
$skx
[1]   "LL"   "LL"   "SB"   "SB"
$skn
character(0)
$xc
[1]   30.00000   53.33333   25.00000
$xx
[1]   1.044111e-08   2.856068e-08   1.500000e+01   8.333333e+00
...
}
Figure  6:   Primal  Solution  II  (lo1)
r$sol$bas
{
$solsta
[1]   "OPTIMAL"
$prosta
[1]   "PRIMAL_AND_DUAL_FEASIBLE"
$skc
[1]   "EQ"   "BS"   "UL"
$skx
[1]   "LL"   "LL"   "BS"   "BS"
$xc
[1]   30.00000   53.33333   25.00000
$xx
[1]   0.000000   0.000000   15.000000   8.333333
...
}
Notice that the basic solution bas is likely to have higher numerical accuracy than the interior-point solution itr, as is the case in this example considering the xx variables.
The solution you receive from the interface will contain the primal variable x (called xx) and the activity of each constraint, xc, defined by x^c = Ax. From the solution status (called solsta) it can be seen how good this solution is, e.g. optimal, nearly optimal, feasible or infeasible. If the solution status is not as you expected, it might be that the problem is either ill-posed, infeasible or unbounded (dual infeasible). This can be read from the problem status (called prosta). The solution and problem status are based on certificates found by the MOSEK optimization library[9], and should always be verified before the returned solution values are used (see Section 2.2.5).
The solution also contains status key vectors for both variables and constraints (called  skx
and  skc).   The  variable  skn  is  not  useful   for  linear  programming  problems.   Each  status
key  will  be  one  of  the  following  two-character  strings.
BS   :   Basic
In   basic   (bas)   solutions:   The   constraint   or   variable   belongs   to   the   basis   of   the
corresponding  simplex  tableau.
SB   :   Super   Basic
In interior-point (itr) and integer (int) solutions:   The activity of a constraint or variable
is  in  between  its  bounds.
LL   :   Lower   Level
The  constraint  or  variable  is  at  its  lower  bound.
UL   :   Upper   Level
The  constraint  or  variable  is  at  its  upper  bound.
EQ   :   Equality
The constraint or variable is at its fixed value (equal lower and upper bound).
UN   :   Unknown
The  status   of   the  constraint   or   variable  could  not   be  determined  (in  practice  never
returned  by  MOSEK).
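As a small illustration of how these keys can be used programmatically (values borrowed from the itr solution in Figure 5):

```r
skx <- c("LL", "LL", "SB", "SB")   # variable status keys from Figure 5
sum(skx == "LL")                   # 2: variables sitting at their lower bound
which(skx == "SB")                 # 3 4: variables strictly between their bounds
```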
In addition to the primal variables just presented, the returned solutions also contain dual variables, not shown here. The dual variables can be used for sensitivity analysis of the problem parameters and are related to the dual problem explained in Section 2.2.2.
[9] More details on problem and solution status keys are available at:
mosek.com → Documentation → Optimization tools manual → Symbolic constants reference.
2.2.2   Duality
The dual problem corresponding to the primal problem (2.1) defined in Section 2.2.1 is shown below in (2.4). Notice that the coefficients of the dual problem are the same as those used in the primal problem. Matrix A, for example, is still the constraint matrix of the primal problem, merely transposed in the dual problem.
In addition, the dual problem has a dual variable for each lower and upper, constraint and variable bound in the primal problem: s_l^c ∈ R^m, s_u^c ∈ R^m, s_l^x ∈ R^n and s_u^x ∈ R^n (the latter being the dual variable of the upper variable bound).
The dual problem is given by

    maximize    (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x + c_0
    subject to  A^T (s_l^c - s_u^c) + s_l^x - s_u^x  =  c ,
                s_l^c , s_u^c , s_l^x , s_u^x  ≥  0 .                        (2.4)
Recall that equality constraints had to be specified using two inequalities (with l^c = u^c) by definition of the primal problem (2.1). This means that an equality constraint will have two dual variables instead of just one. If the user wants to calculate the single dual variable, as it would have been if equality constraints could be specified directly, it is given by s_l^c - s_u^c. However, it is not always recommended to do so, as it is often easier to stay with the same problem formulation and do all calculations directly on that.
The lo1 Example (Part 3 of 5)   The part of the solution to the lo1 example that was previously omitted is now shown in Figures 7 and 8. The dual variables slc, suc, slx and sux correspond naturally to s_l^c, s_u^c, s_l^x and s_u^x in the dual problem. The variable snx is not useful for linear programming problems.
Looking at the definition of the lo1 problem (2.2), the first constraint is an equality, the second is a lower bound, and the third is an upper bound. The dual variables of the inequalities can be read directly from slc and suc respectively, while for the equality constraint, which has two dual variables, you could also look at the combined lower-minus-upper constraint dual variable (slc - suc), which in this case would give you a dual value of 2.5.
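Using the numbers of the basic solution in Figure 8, this combined dual variable can be computed directly (a sketch with hard-coded values):

```r
slc <- c(0, 0, 0)              # lower-bound constraint duals (Figure 8)
suc <- c(-2.5, 0, -0.3333333)  # upper-bound constraint duals (Figure 8)
y <- slc - suc                 # combined dual s_l^c - s_u^c
y[1]                           # 2.5: dual value of the equality constraint
```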
Figure  7:   Dual  Solution  I  (lo1)
r$sol$itr
{
$solsta
[1]   "OPTIMAL"
$prosta
[1]   "PRIMAL_AND_DUAL_FEASIBLE"
...
$slc
[1]   0.000000e+00   1.557535e-09   0.000000e+00
$suc
[1]   -2.5000000   0.0000000   -0.3333333
$slx
[1]   -4.500000e+00   -2.166667e+00   -4.982961e-09   -1.027032e-08
$sux
[1]   0.0000e+00   5.5856e-10   0.0000e+00   0.0000e+00
$snx
[1]   0   0   0   0
}
Figure  8:   Dual  Solution  II  (lo1)
r$sol$bas
{
$solsta
[1]   "OPTIMAL"
$prosta
[1]   "PRIMAL_AND_DUAL_FEASIBLE"
...
$slc
[1]   0   0   0
$suc
[1]   -2.5000000   0.0000000   -0.3333333
$slx
[1]   -4.500000   -2.166667   0.000000   0.000000
$sux
[1]   0   0   0   0
}
Notice that the basic solution bas is likely to have higher numerical accuracy than the interior-point solution itr, as is the case here.
2.2.4   Hot-starting
Hot-starting (also known as warm-starting) is a way to make the optimization process aware of a point in the solution space which, depending on its quality (feasibility and closeness to the optimal solution), can increase the speed of optimization. In linear programming it is typically used when you know the optimal solution to a similar problem with only a few small changes to the constraints and objective. In these cases it is assumed that the next optimal solution is nearby in the solution space, and thus it also makes sense to switch to the simplex optimizers, which excel at small changes to the set of basic variables - even on large problems. In fact, currently, the user will have to use one of the simplex optimizers for hot-starting in linear programming, as the interior-point optimizer in MOSEK cannot take advantage of initial solutions.
Simplex optimizers only look for the basic solution bas in the input argument $sol, and do not consider the solution and problem statuses within. These may, however, be specified anyway for the convenience of the user, and warnings will only be given if no useful information could be passed to the MOSEK optimizer despite the fact that $sol had a non-empty definition and hot-starting was attempted.
When adding a new variable
In column generation it is necessary to reoptimize the problem after one or more variables have been added to the problem. Given a previous solution to the problem, the number of basis changes would be small and we can hot-start using a simplex optimizer.
Assume that we would like to solve the problem

    maximize    3x_1 + 1x_2 + 5x_3 + 1x_4 - 1x_5
    subject to  3x_1 + 1x_2 + 2x_3        - 2x_5   =  30,
                2x_1 + 1x_2 + 3x_3 + 1x_4 - 10x_5  ≥  15,
                       2x_2        + 3x_4 + 1x_5   ≤  25,                    (2.5)

having the bounds

    0 ≤ x_1 ≤ ∞,   0 ≤ x_2 ≤ 10,   0 ≤ x_3 ≤ ∞,   0 ≤ x_4 ≤ ∞,   0 ≤ x_5 ≤ ∞,   (2.6)
which is equal to the lo1 problem (2.2), except that a new variable x_5 has been added. To hot-start from the previous solution of Figure 4, which is still primal feasible, we can expand the problem description and include x_5 as shown.
Figure 11: Hot-starting when a new variable is added
# Define 'lo1' and obtain the solution 'r' (not shown)
lo1_backup <- lo1

# Append the new variable to the problem:
lo1$c  <- c(lo1$c, -1)
lo1$bx <- cbind(lo1$bx, c(0,Inf))
lo1$A  <- cBind(lo1$A, c(-2,-10,1))

# Extend and reuse the old basis solution:
oldbas     <- r$sol$bas
oldbas$skx <- c(oldbas$skx, "LL")
oldbas$xx  <- c(oldbas$xx,  0)
oldbas$slx <- c(oldbas$slx, 0)
oldbas$sux <- c(oldbas$sux, 0)

# Hot-start the simplex optimizer:
lo1$iparam <- list(OPTIMIZER="OPTIMIZER_PRIMAL_SIMPLEX")
lo1$sol    <- list(bas=oldbas)
r_var <- mosek(lo1)
When fixing a variable
In branch-and-bound methods for integer programming it is necessary to reoptimize the problem after a variable has been fixed to a value. From the solution of the lo1 problem (Figure 4), we fix the variable x_4 = 2, and hot-start using
Figure 12: Hot-starting when a variable has been fixed
# Define 'lo1' and obtain the solution 'r' (not shown)
lo1_backup <- lo1

# Fix the fourth variable in the problem
lo1$bx[,4] <- c(2,2)

# Reuse the old basis solution
oldbas <- r$sol$bas

# Hot-start the simplex optimizer
lo1$iparam <- list(OPTIMIZER="OPTIMIZER_DUAL_SIMPLEX")
lo1$sol <- list(bas=oldbas)
r_fix <- mosek(lo1)
When adding a new constraint
In cutting plane algorithms it is necessary to reoptimize the problem after one or more constraints have been added to the problem. From the solution of the lo1 problem (Figure 4), we add the constraint x_1 + x_2 ≥ 2, and hot-start using
Figure 13: Hot-starting when a constraint has been added
# Define 'lo1' and obtain the solution 'r' (not shown)
lo1_backup <- lo1

# Append the new constraint to the problem:
lo1$bc <- cbind(lo1$bc, c(2,Inf))
lo1$A  <- rBind(lo1$A, c(1,1,0,0))

# Extend and reuse the old basis solution:
oldbas     <- r$sol$bas
oldbas$skc <- c(oldbas$skc, "LL")
oldbas$xc  <- c(oldbas$xc,  2)
oldbas$slc <- c(oldbas$slc, 0)
oldbas$suc <- c(oldbas$suc, 0)

# Hot-start the simplex optimizer:
lo1$iparam <- list(OPTIMIZER="OPTIMIZER_DUAL_SIMPLEX")
lo1$sol <- list(bas=oldbas)
r_con <- mosek(lo1)
Using numerical values to represent status keys
In the previous examples the status keys of constraints and variables were all defined as two-character string codes. Although this makes the status keys easy to read, it might sometimes be easier to work with numerical values. For this reason we now demonstrate how to achieve this with the R-to-MOSEK interface. The explanation of status keys can be found in Section 2.2.1.
Figure 14: Hot-starting using an initial guess
# Define 'lo1' (not shown)
# Define the status key string array
stkeys <- c("BS","SB","LL","UL","EQ","UN")

# Try to guess the optimal status keys
mybas <- list()
mybas$skc <- stkeys[c(5,1,4)]
mybas$skx <- stkeys[c(3,3,1,1)]

# Hot-start the simplex optimizer
lo1$iparam <- list(OPTIMIZER="OPTIMIZER_PRIMAL_SIMPLEX")
lo1$sol <- list(bas=mybas)
r_guess <- mosek(lo1)
So basically stkeys[[idx]] will return the status key of index idx, and this can be vectorized as stkeys[idxvec] for a vector of indexes idxvec. Going in the opposite direction, from status keys to indexes, requires a bit more work. For this you can use match(str, stkeys) to find the index of a status key str, and equivalently match(strvec, stkeys) for a vector of status keys strvec.
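Putting the two directions together in a small round-trip sketch:

```r
stkeys <- c("BS", "SB", "LL", "UL", "EQ", "UN")
skx <- stkeys[c(3, 3, 1, 1)]   # indexes to keys:  "LL" "LL" "BS" "BS"
idx <- match(skx, stkeys)      # keys back to indexes:  3 3 1 1
```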
2.2.5   Complete  Best  Practice  example
Linear  programs,  as  well  as  other  optimization  models,  are  often  solved  as  part  of  a  larger
framework  where  the  solutions   are  not   just   printed  on  the  screen,   but   instead  given  as
input to other scripts and programs.   Such frameworks include mathematical constructions
such  as   branch-and-price,   but   also  include  programs   with  internal   optimization  models
hidden  from  the  end  user.   In  these  cases,  special  attention  must  be  given  to  the  handling
of  function  calls  and  return  values.
Catch execution errors
From within the Rmosek package, execution errors similar to the ones generated by the built-in stop function, part of the R language definition, may be provoked. This typically happens when the number of input arguments does not match a valid calling convention, or when one of the input arguments cannot be evaluated. Interface errors within the R API or Matrix package could, however, also generate these kinds of errors.
When execution errors are provoked in some function, control is returned to the outer-most scope (the global environment) and no values are returned from the function call. That is, when calling res <- some_function(...) and an execution error is provoked, the variable res will remain unaffected and could host old, obsolete values.
Luckily, execution errors can be caught with the built-in try function, which takes a function call as input argument. The result of this try function (say res) will either be the returned value of the function call or an error class. The error class inherits from "try-error" and can be distinguished from normal output by the boolean result of inherits(res, "try-error").
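A minimal sketch of this pattern, independent of Rmosek:

```r
bad  <- try(stop("could not be evaluated"), silent=TRUE)
good <- try(1 + 1,                          silent=TRUE)
inherits(bad,  "try-error")   # TRUE:  the call failed
inherits(good, "try-error")   # FALSE: 'good' holds the value 2
```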
Inspect the response code
The response code may either be zero (success), a positive integer (error in the MOSEK optimization library) or simply NaN (error in the Rmosek interface). The list of response codes that can be returned by the MOSEK optimization library is similar to those explained online[12]. As explained in 'Errors, warnings and response codes' in Section 2.2.1, always use the built-in identical function to compare response codes.
When an error is encountered, the interface will still try to extract and return a solution. If, for instance, the optimization procedure ran out of memory or time, or was interrupted by the keyboard sequence <CTRL> + <C> (see e.g. Section 2.8.2), a good solution may have been identified anyway. However, if the extraction of the solution was what caused the
[12] Check out: mosek.com → Documentation → C API manual → Response codes.
error in the first place, the returned solution may only be partially constructed. Ultimately, no guarantees about the availability of variables can be given when the response code is not zero.
In case of a non-zero response code, the availability of a variable within a result res can be tested with an expression similar to !inherits(try({res$sol$itr$solsta}), "try-error"), which is TRUE when a solution status of an interior-point solution in the result has been defined. Notice that a scope has been defined in the input argument of the try function using { and }. Within this scope, several variables can be tested for existence, or alternatively, entire blocks of code may be inserted and evaluated.
Inspect the solution status
The solution status is based on certificates found by the MOSEK optimization library, and classifies the returned solution. In the list below, the first two status keys also exist for the dual case with PRIMAL replaced by DUAL. Also, all these seven status keys (excluding UNKNOWN) have a nearly equivalent status key with NEAR_ added as a prefix.
PRIMAL_INFEASIBLE_CER   (linear   problems   only)
The returned data contains a certificate that the problem is primal infeasible.
PRIMAL_FEASIBLE
The returned data contains a primal feasible solution.
PRIMAL_AND_DUAL_FEASIBLE
The  returned  data  contains  a  primal  and  dual  feasible  solution,  but  is  not  optimal.
OPTIMAL
The  returned  data  contains  an  optimal  solution.
INTEGER_OPTIMAL   (integer   problems   only)
The  returned  data  contains  an  integer  optimal  solution.
UNKNOWN
The  returned  data  should  not  be  used.
A response code of zero is not enough to conclude that a solution can be extracted from the returned data. In particular, the returned data may be a certificate of primal or dual infeasibility instead of a solution to the problem. Moreover, the returned primal variables may not contain any useful information if the solution status is e.g. DUAL_FEASIBLE. This also holds true for the dual variables if the solution status is e.g. PRIMAL_FEASIBLE. Thus it is important to check and respond to the solution status before using the solution variables.
Inspect the problem status
The problem status is based on certificates found by the MOSEK optimization library, and classifies the specified problem description. In the list below, the first three status keys also exist for the dual case with PRIMAL replaced by DUAL, and for the combined case with PRIMAL replaced by PRIMAL_AND_DUAL.
PRIMAL_INFEASIBLE
The  problem  description  is  primal  infeasible.
NEAR_PRIMAL_FEASIBLE
The problem description satisfies a relaxed primal feasibility criterion.
PRIMAL_FEASIBLE
The  problem  description  is  primal  feasible.
ILL_POSED   (non-linear   problems   only)
The  problem  description  has  an  unstable  formulation.
Regarding the usefulness of the returned data, the solution status often tells the whole story. For mixed integer programs (see Section 2.5), however, there are no certificates for infeasibility and this status has to be read from the problem status. Also, if the solution status is UNKNOWN, the problem status may still contain valuable information. Furthermore, for non-linear problems, it is a good idea to verify that the problem status is not ILL_POSED.
The lo1 Example (Part 5 of 5)   In this example we define a function which shall try to solve the lo1 example from Section 2.2.1, and return the variable activities of the basic solution. The function is completely silent except for the explicit cat-commands, and will not fail to return a usable result unless explicitly stopped by a stop-command.
Furthermore, the double-typed parameter OPTIMIZER_MAX_TIME (see e.g. the FAQ, Section 2.8.3) can be controlled by the input argument maxtime. If zero, MOSEK will have no time to identify a useful solution. The function is shown in Figure 15.
As seen, the verbosity is set to zero with the option verbose to specify that the interface should be completely silent (see the FAQ, Section 2.8.1). Also, any execution errors that may be thrown by the mosek function are caught. The function then verifies the response code, and chooses to continue even if it is not zero (you may choose to do otherwise). It then extracts the basic solution and, at the same time, evaluates whether all the variables we are going to use are defined. We do not bother to check if the problem status is ill-posed, as lo1 is a linear program, but instead continue to the solution status, which is only accepted if it is close to optimal (again, you may choose to do otherwise). Finally, the variable activities of the solution are returned.
Figure 15: Complete Best Practice lo1 example
get_lo1_solution_variables <- function(maxtime) {
  lo1 <- list(sense="max", c=c(3,1,5,1))
  lo1$A <- Matrix(c(3,1,2,0, 2,1,3,1, 0,2,0,3),
                  nrow=3, byrow=TRUE, sparse=TRUE)
  lo1$bc <- rbind(blc=c(30,15,-Inf), buc=c(30,Inf,25))
  lo1$bx <- rbind(blx=c(0,0,0,0), bux=c(Inf,10,Inf,Inf))
  lo1$dparam <- list(OPTIMIZER_MAX_TIME=maxtime)

  r <- try(mosek(lo1, list(verbose=0)), silent=TRUE)
  if (inherits(r, "try-error")) {
    stop("Rmosek failed somehow!")
  }

  if (!identical(r$response$code, 0)) {
    cat(paste("**", "Response code:", r$response$code, "\n"))
    cat(paste("**", r$response$msg, "\n"))
    cat("Trying to continue..\n")
  }

  isdef <- try({
    rbas <- r$sol$bas
    rbas$solsta; rbas$prosta; rbas$xx
  }, silent=TRUE)
  if (inherits(isdef, "try-error")) {
    stop("Basic solution was incomplete!")
  }

  switch(rbas$solsta,
    OPTIMAL = {
      cat("The solution was optimal, I am happy!\n")
    },
    NEAR_OPTIMAL = {
      cat("The solution was close to optimal, very good..\n")
    },
    #OTHERWISE:
    {
      cat(paste("**", "Solution status:", rbas$solsta, "\n"))
      cat(paste("**", "Problem status:", rbas$prosta, "\n"))
      stop("Solution could not be accepted!")
    }
  )

  return(rbas$xx)
}
2.3   Conic  Quadratic  Programming
2.3.1   Solving CQP problems
Conic quadratic programming[13] (also known as second-order cone programming) is a generalization of linear, quadratic and all-quadratic programming. Conic quadratic programs can be written as shown below, and possess the property of strong duality under Slater's condition. This is the primal problem.
    minimize    c^T x + c_0
    subject to  l^c ≤ Ax ≤ u^c ,
                l^x ≤  x ≤ u^x ,
                x ∈ C .                                                      (2.7)
The convex cone C can be written as the Cartesian product over a finite set of convex cones, C = C_1 × ··· × C_p, which basically means that the variables can be partitioned into subsets of variables belonging to different cones. In principle this also means that each variable can only belong to one cone, but in practice we can define several duplicates x̃_i of x_i belonging to different cones, connected by x̃_i = x_i in the linear constraints of (2.7).
The MOSEK optimization library currently allows three types of convex cones: the R-cone, the quadratic cone and the rotated quadratic cone. The R-cone contains the full set of real numbers and is the default cone in this interface for variables with no other specification. Notice that if all variables belonged to this cone, the problem would reduce to a linear programming problem. The quadratic cone is defined by

    C_t = { x ∈ R^(n_t) : x_1 ≥ sqrt( Σ_{j=2}^{n_t} x_j^2 ) }               (2.8)
for which the indexes shown here refer only to the subset of variables belonging to the cone. Similarly, the rotated quadratic cone is given by

    C_t = { x ∈ R^(n_t) : 2 x_1 x_2 ≥ Σ_{j=3}^{n_t} x_j^2 ,  x_1 ≥ 0 ,  x_2 ≥ 0 } .   (2.9)
These definitions may seem restrictive, but they can model a large number of problems, as shown by the transformations of Appendix B.
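The two cone definitions can be checked numerically with small helper functions (hypothetical names, plain R); x holds only the variables belonging to the cone, in cone order:

```r
# Membership tests for the quadratic cone (2.8) and the rotated
# quadratic cone (2.9).
in_quad  <- function(x) x[1] >= sqrt(sum(x[-1]^2))
in_rquad <- function(x) x[1] >= 0 && x[2] >= 0 &&
                        2*x[1]*x[2] >= sum(x[-(1:2)]^2)

in_quad(c(5, 3, 4))    # TRUE:  5 >= sqrt(9 + 16)
in_rquad(c(1, 2, 2))   # TRUE:  2*1*2 >= 2^2
```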
[13] Check out: mosek.com → Documentation → Optimization tools manual → Modeling → Conic optimization.
The cqo1 Example (Part 1 of 3)   The following is an example of a conic optimization
problem  with  one  linear  constraint,  non-negative  variables  and  two  cones:
    minimize    x_4 + x_5 + x_6
    subject to  x_1 + x_2 + 2x_3  =  1 ,
                x_1, x_2, x_3  ≥  0 ,
                x_4 ≥ sqrt( x_1^2 + x_2^2 ) ,
                2 x_5 x_6 ≥ x_3^2 .                                          (2.10)
The first cone is of the quadratic cone type (MSK_CT_QUAD), while the second is of the rotated quadratic cone type (MSK_CT_RQUAD). The subindexes of the variables used to define these cones follow naturally, as seen in the following R code.
Figure 16: Conic Quadratic Optimization (cqo1)
cqo1 <- list(sense = "min")
cqo1$c <- c(0,0,0,1,1,1)
cqo1$A <- Matrix(c(1,1,2,0,0,0),
                 nrow=1, byrow=TRUE, sparse=TRUE)
cqo1$bc <- rbind(blc = 1, buc = 1)
cqo1$bx <- rbind(blx = c(0,0,0,-Inf,-Inf,-Inf),
                 bux = rep(Inf,6))
cqo1$cones <- cbind(
  list("QUAD",  c(4,1,2)),
  list("RQUAD", c(5,6,3))
)
rownames(cqo1$cones) <- c("type","sub")
r <- mosek(cqo1)
From this example the input arguments for a conic program (2.7) follow easily (refer to Figure 2, Section 2.1). The objective function, the linear constraints and the variable bounds should all be specified as for linear programs (see Section 2.2), and the only addition to this is the quadratic cones specified in the list-typed matrix cones.
The cones matrix has a column for each cone, and a row for each descriptive element. The first row, called type, should specify the cone type as a string, being either quadratic "QUAD" or rotated quadratic "RQUAD". Notice that the MOSEK library cone type prefix MSK_CT_ is optional. The second row, called sub, should specify the subset of variables belonging to the cone as a numeric vector - and the ordering does matter! The ith element of sub will be the index of the variable referred to by x_i in the cone definitions (2.8) and (2.9).
As an example, the rotated quadratic cone with subindexes c(4,6,2,3) would define the cone

    C_t = { x ∈ R^4 : 2 x_4 x_6 ≥ x_2^2 + x_3^2 ,  x_4 ≥ 0 ,  x_6 ≥ 0 } .   (2.11)
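In the cones format of this interface, that cone would be specified as the following sketch suggests:

```r
# One-column list matrix holding the cone of (2.11).
cone <- matrix(list(), nrow=2, ncol=1)
rownames(cone) <- c("type", "sub")
cone[,1] <- list("RQUAD", c(4,6,2,3))   # x_1 = x4, x_2 = x6, then x2 and x3
```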
2.3   Conic  Quadratic  Programming   A  GUIDED  TOUR
Figure 16 showed a simple way to specify cones given an explicit representation. In many
practical cases, however, cones are more conveniently specified in chunks or within a loop.
For this purpose, preallocation should always be preferred, as shown here.
Figure 17: Preallocating cones (cqo1)
NUMCONES <- 2
cqo1$cones <- matrix(list(), nrow=2, ncol=NUMCONES)
rownames(cqo1$cones) <- c("type","sub")
cqo1$cones[,1] <- list("QUAD",  c(4,1,2))
cqo1$cones[,2] <- list("RQUAD", c(5,6,3))
The solution you receive from the interface will contain the primal variable x (called xx)
and the activity of each constraint, xc, defined by x^c = Ax. From the solution status
(called solsta) it can be seen how good this solution is, e.g. optimal, nearly optimal,
feasible or infeasible. If the solution status is not as you expected, it might be that the
problem is either ill-posed, infeasible or unbounded (dual infeasible). This can be read
from the problem status (called prosta). The solution and problem status are based on
certificates found by the MOSEK optimization library[15], and should always be verified
before the returned solution values are used (see Section 2.2.5).

[15] More details on problem and solution status keys available at:
mosek.com → Documentation → Optimization tools manual → Symbolic constants reference.
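The verification step just described can be sketched as a small guard function that inspects solsta before the solution values are trusted. The function name, the accepted statuses and the fallback behavior are our own choices, not part of the interface:

```r
# Illustrative guard (our own helper, not part of Rmosek): only return the
# primal solution if the interior-point solution status is acceptable.
extract_solution <- function(r, accept = c("OPTIMAL", "NEAR_OPTIMAL")) {
  sol <- r$sol$itr
  if (!is.null(sol) && sol$solsta %in% accept) {
    sol$xx
  } else {
    warning("solution not usable; prosta = ", sol$prosta)
    NULL
  }
}

# Hypothetical result object mimicking the structure returned by mosek():
fake <- list(sol = list(itr = list(solsta = "OPTIMAL",
                                   prosta = "PRIMAL_AND_DUAL_FEASIBLE",
                                   xx = c(0.5, 0.25, 0.125))))
extract_solution(fake)  # c(0.5, 0.25, 0.125)
```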
The solution also contains status key vectors for both variables, linear constraints and
conic constraints (called skx, skc and skn). Each status key will be one of the following
two-character strings.

BS : Basic
  In basic (bas) solutions: The constraint or variable belongs to the basis of the
  corresponding simplex tableau.
SB : Super Basic
  In interior-point (itr) and integer (int) solutions: The constraint or variable is in
  between its bounds.
LL : Lower Level
  The constraint or variable is at its lower bound.
UL : Upper Level
  The constraint or variable is at its upper bound.
EQ : Equality
  The constraint or variable is at its fixed value (equal lower and upper bound).
UN : Unknown
  The status of the constraint or variable could not be determined.
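The status keys lend themselves to quick vectorized summaries in R. As a sketch, with status keys as they might appear in a returned solution, the variables sitting at a bound can be located like this:

```r
# Sample status keys (invented for illustration): variable 1 is super
# basic, variable 2 is at its lower bound. Variables at a bound carry one
# of the keys LL, UL or EQ.
skx <- c("SB", "LL")
at_bound <- which(skx %in% c("LL", "UL", "EQ"))
at_bound  # 2
```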
In addition to the primal variables just presented,  the returned solutions also contain dual
variables  not  shown  here.   The  dual   variables  can  be  used  for  sensitivity  analysis  of   the
problem  parameters  and  are  related  to  the  dual  problem  explained  in  Section  2.3.2.
2.3.2 Duality
The dual problem corresponding to the primal conic quadratic problem (2.7) defined in
Section 2.3.1 is shown below in (2.12). Notice that the coefficients of the dual problem are
the same as those used in the primal problem. Matrix A, for example, is still the constraint
matrix of the primal problem, merely transposed in the dual problem formulation.
The dual problem is very similar to that of linear programming (Section 2.2.2), except
for the added variable s_n^x belonging to the dual cone K* of K.
maximize     (l^c)^T s_l^c − (u^c)^T s_u^c + (l^x)^T s_l^x − (u^x)^T s_u^x + c0
subject to   A^T (s_l^c − s_u^c) + s_l^x − s_u^x + s_n^x   =   c ,
             s_l^c, s_u^c, s_l^x, s_u^x  ≥  0 ,
             s_n^x ∈ K* .                                              (2.12)
The dual cone K* always has the same number of variables as its primal cone K, and is in fact
easily expressed for the three convex cone types supported by MOSEK. For the R-cone the
dual is the single point in space with all dual variables s_n^x equal to zero, such that (2.12)
coincides with the dual linear program (2.4) when all primal variables belong to the R-cone.
Just as easily, it holds that both the quadratic and the rotated quadratic cone are self-dual,
such that K = K*. These facts ease the formulation of dual conic problems and have
been exploited by the MOSEK optimization library for fast computations.
The cqo1 Example (Part 3 of 3)   The part of the solutions to the cqo1 example
that was previously omitted is now shown in Figure 19. The dual variables slc, suc, slx,
sux and snx correspond naturally to s_l^c, s_u^c, s_l^x, s_u^x and s_n^x in the dual problem.
Looking at the definition of the cqo1 problem (2.10), the first and only constraint is an
equality constraint, which is why its dual variables can either be read individually from its
implicit lower and upper bound, slc and suc, or from the combined lower-minus-upper
constraint dual variable (slc - suc). Notice that since none of the primal variables have
upper bounds, the corresponding dual variables sux are all fixed to zero. Furthermore, since
all primal variables belong to a self-dual quadratic cone, all of the snx variables can attain
values from the corresponding quadratic cones.
Figure  19:   Dual  Solution  (cqo1)
r$sol$itr
{
$solsta
[1]   "OPTIMAL"
$prosta
[1]   "PRIMAL_AND_DUAL_FEASIBLE"
...
$slc
[1]   0.7071068
$suc
[1]   0
$slx
[1]   2.582619e-09   2.582619e-09   3.845566e-09   0.000000e+00   0.000000e+00
[6]   0.000000e+00
$sux
[1]   0   0   0   0   0   0
$snx
[1]   -0.7071068   -0.7071068   -1.4142136   1.0000000   1.0000000   1.0000000
}
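Since both cone types are self-dual, the snx values printed in Figure 19 can be checked against the primal cone definitions directly in R. The values below are copied from the printed output, so a small rounding tolerance is needed:

```r
# Dual variables snx from Figure 19, rounded as printed:
snx <- c(-0.7071068, -0.7071068, -1.4142136, 1, 1, 1)
tol <- 1e-6

# QUAD cone over variables c(4,1,2):  snx[4] >= sqrt(snx[1]^2 + snx[2]^2)
snx[4] + tol >= sqrt(snx[1]^2 + snx[2]^2)   # TRUE

# RQUAD cone over variables c(5,6,3): 2*snx[5]*snx[6] >= snx[3]^2
2 * snx[5] * snx[6] + tol >= snx[3]^2       # TRUE
```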
2.3.3 Notes on Quadratic Programming (QP/QCQP)
Quadratic programming, or more generally all-quadratic programming, is a subset
of conic quadratic programming (CQP) that can easily be handled and solved using modern
CQP solvers. The primal problem of a quadratic program can be stated as shown below
in (2.13), where the matrices Q0 and Q1, and the columns a, b and c, are assumed to have
compliant dimensions. Notice that this problem is convex, and thereby easily solvable, only
if Q0 and Q1 are both positive semidefinite.
minimize     0.5 x^T Q0 x + c^T x
subject to   0.5 x^T Q1 x + a^T x   ≤   b .                            (2.13)
While quadratic terms are added directly to the objective function and constraint matrix
in quadratic programming, CQP formulations instead retain both a linear objective and a
linear constraint matrix, closely resembling a linear program. All non-linearities, such as
quadratic terms, are in this case formulated by themselves using quadratic cones. Many
types of non-linearities can be modeled using these quadratic cones as seen in Appendix B,
but in the case of (2.13), the result is two of the simplest possible rotated quadratic cones
- one for the x^T Q0 x term and one for the x^T Q1 x term.
CQP formulations also have the advantage that there are no conditions on when a model is
convex or not, because it always is. The transformation from quadratic programs to conic
quadratic programs is based on a matrix factorization requiring Q0 and Q1 to be positive
semidefinite, which is exactly the convexity requirement for QCQPs. This means that
x^T Q0 x can be transformed only if x^T Q0 x is convex. In some sense, this transformation
can be seen as a model-strengthening preprocessing step that only has to be done once.
All further changes to the model can of course be made directly to the CQP.
The transformation of a quadratic program, whether it is QP or QCQP,
to a conic quadratic program (CQP), can be seen in Appendix B.1.
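The factorization step can be illustrated in a few lines of base R. This is a sketch of the idea only (the full transformation is in Appendix B.1), and Q is an example matrix chosen by us:

```r
# Sketch: a convex quadratic term 0.5 * t(x) %*% Q %*% x equals
# 0.5 * ||Fm x||^2 when Q = t(Fm) %*% Fm, e.g. via a Cholesky factorization.
# This requires Q to be positive semidefinite - the convexity condition.
Q <- matrix(c(2, 1,
              1, 2), nrow = 2, byrow = TRUE)
Fm <- chol(Q)                      # upper triangular, t(Fm) %*% Fm == Q
x  <- c(1, -1)
quad <- 0.5 * sum(x * (Q %*% x))   # the quadratic term: 1
y <- as.vector(Fm %*% x)
0.5 * sum(y^2)                     # same value, expressible via an RQUAD cone
```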
2.4 Semidefinite Programming
2.4.1 Solving SDP problems
Semidefinite programming[16] is a generalization of linear and conic quadratic programming.
Semidefinite programs can be written as shown below, and possess the property of strong
duality under Slater's condition. This is the primal problem.
minimize     c^T x + Σ_{j=1}^p ⟨C_j, X_j⟩ + c0

subject to   l^c  ≤  Ax + [ Σ_{j=1}^p ⟨A_1j, X_j⟩ , ... , Σ_{j=1}^p ⟨A_mj, X_j⟩ ]^T  ≤  u^c ,

             l^x  ≤  x  ≤  u^x ,

             x ∈ K ,    X_j ∈ S_+^{r_j}   for j = 1, ..., p.           (2.14)
Cone K is defined as in Section 2.3, but could just be the set of reals x ∈ R^n. The problem
has p symmetric positive semidefinite matrix variables X_j ∈ S_+^{r_j} of dimension r_j × r_j, and
symmetric coefficients C_j ∈ S^{r_j} and A_ij ∈ S^{r_j}. We use the standard notation for the
matrix inner product, i.e., for U, V ∈ R^{m×n} we have

    ⟨U, V⟩ = Σ_{i=1}^m Σ_{j=1}^n U_ij V_ij .
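In base R this inner product is simply an elementwise product followed by a sum, as the small check below illustrates (matrices chosen arbitrarily):

```r
# <U, V> = sum over all entries of U[i,j] * V[i,j]
U <- matrix(1:4, nrow = 2)   # [1 3; 2 4]
V <- diag(2)                 # 2x2 identity
sum(U * V)                   # picks out the diagonal: 1 + 4 = 5
```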
The sdo1 Example (Part 1 of 3)   The following is an example of a semidefinite
optimization problem with one linear objective and two linear constraints, with 9 scalar
variables from, respectively, a quadratic cone and a semidefinite matrix:

minimize     ⟨ [ 2 1 0 ; 1 2 1 ; 0 1 2 ] , X1 ⟩ + x1

subject to   ⟨ [ 1 0 0 ; 0 1 0 ; 0 0 1 ] , X1 ⟩ + x1         =   1.0 ,
             ⟨ [ 1 1 1 ; 1 1 1 ; 1 1 1 ] , X1 ⟩ + x2 + x3    =   0.5 ,
             x1  ≥  sqrt( x2^2 + x3^2 ) ,
             X1 ⪰ 0 .                                                  (2.15)

[16] Check out: mosek.com → Documentation → Optimization tools manual → Modeling → Conic
optimization.
The problem above corresponds to minimizing the objective:

    2( [X1]_11 + [X1]_22 + [X1]_33 ) + 2( [X1]_21 + [X1]_32 ) + x1 ,

subject to the linear constraints

    [X1]_11 + [X1]_22 + [X1]_33 + x1   =   1.0 ,
    [X1]_11 + [X1]_22 + [X1]_33 + 2( [X1]_21 + [X1]_31 + [X1]_32 ) + x2 + x3   =   0.5 ,

and can be modeled in R as shown.
Figure 20: Semidefinite Optimization (sdo1)
sdo1 <- list(sense="min")
sdo1$c <- c(1,0,0)
sdo1$A <- Matrix(c(1,0,0,
                   0,1,1), nrow=2, byrow=TRUE, sparse=TRUE)
sdo1$bc <- rbind(blc = c(1, 0.5), buc = c(1, 0.5))
sdo1$bx <- rbind(blx = rep(-Inf,3), bux = rep(Inf,3))
sdo1$cones <- cbind(list("quad", c(1,2,3)))
# One semidefinite matrix variable of size 3x3:
N <- 3
sdo1$bardim <- c(N)
# Block triplet format specifying the lower triangular part
# of the symmetric coefficient matrix barc:
sdo1$barc$j <- c(1, 1, 1, 1, 1)
sdo1$barc$k <- c(1, 2, 3, 2, 3)
sdo1$barc$l <- c(1, 2, 3, 1, 2)
sdo1$barc$v <- c(2, 2, 2, 1, 1)
# Block triplet format specifying the lower triangular part
# of the symmetric coefficient matrix barA:
sdo1$barA$i <- c(1, 1, 1, 2, 2, 2, 2, 2, 2)
sdo1$barA$j <- c(1, 1, 1, 1, 1, 1, 1, 1, 1)
sdo1$barA$k <- c(1, 2, 3, 1, 2, 3, 2, 3, 3)
sdo1$barA$l <- c(1, 2, 3, 1, 2, 3, 1, 1, 2)
sdo1$barA$v <- c(1, 1, 1, 1, 1, 1, 1, 1, 1)
r <- mosek(sdo1)
barx <- 1.0 * bandSparse(N, k=0:(1-N), symm=TRUE)
barx@x <- r$sol$itr$barx[[1]]
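To see that the block triplet entries in Figure 20 really encode the coefficient matrix of (2.15), the lower-triangular triplets for barc can be expanded into a full symmetric matrix in plain R:

```r
# Expand the barc triplets (k = row, l = column, v = value, lower triangle
# only) into the full symmetric 3x3 coefficient matrix used in (2.15).
k <- c(1, 2, 3, 2, 3)
l <- c(1, 2, 3, 1, 2)
v <- c(2, 2, 2, 1, 1)
C <- matrix(0, 3, 3)
for (t in seq_along(v)) {
  C[k[t], l[t]] <- v[t]
  C[l[t], k[t]] <- v[t]   # mirror into the upper triangle
}
C   # rows: (2,1,0), (1,2,1), (0,1,2)
```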
2.4.2 Duality
The dual problem corresponding to the primal semidefinite problem (2.14) is shown below
in (2.16). Notice that the coefficients of the dual problem are the same as those used in the
primal problem. Matrix A, for example, is still the constraint matrix of the primal problem,
merely transposed in the dual problem formulation.
The dual problem is very similar to that of conic quadratic programming (Section 2.3.2),
except for the added constraint and the p positive semidefinite variables S_j.
maximize     (l^c)^T s_l^c − (u^c)^T s_u^c + (l^x)^T s_l^x − (u^x)^T s_u^x + c0
subject to   A^T y + s_l^x − s_u^x + s_n^x   =   c ,
             Σ_{i=1}^m y_i A_ij + S_j   =   C_j ,    j = 1, ..., p ,
             y = s_l^c − s_u^c   and   s_l^c, s_u^c, s_l^x, s_u^x  ≥  0 ,
             s_n^x ∈ K* ,
             S_j ∈ S_+^{r_j}   for j = 1, ..., p .                     (2.16)
2.5 Mixed Integer Programming
2.5.1 Solving MILP and MICQP problems
Mixed integer programming is a common and very useful extension to both linear and
conic optimization problems, where one or more of the variables in the problem are required
to attain only integer values. For the user this requires only a modest change to the
model, as these variables simply have to be pointed out, but for the underlying optimization
library the problem increases in complexity for every integer variable added[17].
This happens because mixed integer optimization problems have to be solved using
continuous relaxations and branching strategies to force integrality. Consequently, the
running time of the optimization process will be highly dependent on the strength of the
continuous relaxation of the problem formulation - that is, how far from the optimal mixed
integer solution the problem formulation is when solved without the integrality requirement.
Some suggestions to reduce the solution time are thus:
• Relax the termination criterion: In case the run time is not acceptable, the first thing
  to do is to relax the termination criterion (see Section 2.5.3).

• Specify a good initial solution: In many cases a good feasible solution is either known
  or easily computed using problem-specific knowledge. If a good feasible solution is
  known, it is usually worthwhile to use this as a starting point for the integer optimizer.

• Improve the formulation: A mixed-integer optimization problem may be impossible
  to solve in one form and quite easy in another form. However, it is beyond the scope
  of this manual to discuss good formulations for mixed integer problems[18].
A change that the user will notice when starting to enforce integrality is that the notion of a
dual problem is no longer defined for the problem at hand. This means that dual variables
will no longer be part of the solution to the optimization problem, and that only the primal
variables, constraint activities and problem/solution status reports can be expected from
the output structure returned by the interface.

[17] Check out: mosek.com → Documentation → Optimization tools manual → The optimizer for mixed
integer problems.
[18] See for instance: L. A. Wolsey, Integer Programming, John Wiley and Sons, 1998.
The milo1 Example (Part 1 of 2)   The way of requiring integer variables is the
same regardless of whether we solve linear or conic quadratic problems. The following is
an example of a simple mixed integer linear optimization problem, with two inequalities
and two non-negative integer variables:

maximize     x1 + 0.64 x2
subject to   50 x1 + 31 x2   ≤   250 ,
             3 x1 − 2 x2   ≥   −4 ,
             x1, x2 ≥ 0   and integer.                                 (2.17)

This is easily programmed in R using the piece of code shown in Figure 21, where x1 and x2
are pointed out as integer variables.
Figure 21: Mixed Integer Optimization (milo1)
milo1 <- list(sense = "max")
milo1$c <- c(1, 0.64)
milo1$A <- Matrix(c(50, 31,
                     3, -2),
                  nrow = 2,
                  byrow = TRUE,
                  sparse = TRUE)
milo1$bc <-
  rbind(blc = c(-Inf,-4),
        buc = c(250,Inf))
milo1$bx <-
  rbind(blx = c(0,0),
        bux = c(Inf,Inf))
milo1$intsub <- c(1,2)
r <- mosek(milo1)

Figure 22: Output object (milo1)
r$sol$int
{
$solsta
[1] "INTEGER_OPTIMAL"
$prosta
[1] "PRIMAL_FEASIBLE"
$skc
[1] "UL" "SB"
$skx
[1] "SB" "LL"
$xc
[1] 250 15
$xx
[1] 5 0
}
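Because (2.17) is tiny, the reported optimum can be sanity-checked in plain R by enumerating integer points, with no MOSEK required. The 0..10 search box is our own assumption; it clearly contains the optimum, since 50*x1 ≤ 250 and 31*x2 ≤ 250 already force x1 ≤ 5 and x2 ≤ 8:

```r
# Brute-force the MILP (2.17) over a small grid of integer points.
best <- -Inf
argbest <- c(NA, NA)
for (x1 in 0:10) {
  for (x2 in 0:10) {
    feasible <- (50*x1 + 31*x2 <= 250) && (3*x1 - 2*x2 >= -4)
    obj <- x1 + 0.64*x2
    if (feasible && obj > best) {
      best <- obj
      argbest <- c(x1, x2)
    }
  }
}
argbest  # c(5, 0), matching $xx in Figure 22
best     # 5
```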
The input arguments follow those of a linear or conic program with the additional
identification of the integer variables (refer to Figure 2). The column vector intsub should
simply contain indexes to the subset of variables for which integrality is required. For
instance, if x should be a binary {0,1}-variable, its index in the problem formulation should
be added to intsub, and its bounds 0 ≤ x ≤ 1 should be specified explicitly.
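As a sketch, a hypothetical three-variable problem in which only the third variable is binary would combine intsub with explicit bounds like this (the problem fragment and bounds are illustrative, not a complete model):

```r
# Hypothetical problem fragment: make variable 3 binary.
prob <- list(sense = "max")
prob$intsub <- c(3)                # integrality required on variable 3
blx <- c(-Inf, -Inf, 0)            # lower bounds
bux <- c( Inf,  Inf, 1)            # upper bounds: 0 <= x3 <= 1
prob$bx <- rbind(blx, bux)         # together with intsub: x3 in {0, 1}
```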
If executed correctly you should be able to see the log of the interface and optimization
process printed to the screen. The output structure shown in Figure 22 will only include
an integer solution int, since we are no longer in the continuous domain in which the
interior-point algorithm operates. The structure also contains the problem status as well
as the solution status based on certificates found by the MOSEK optimization library[19].
2.5.2 Hot-starting
Hot-starting (also known as warm-starting) is a way to make the optimization process
aware of a feasible point in the solution space which, depending on its quality
(closeness to the optimal solution), can increase the speed of optimization. In mixed
integer programming there are many ways to exploit a feasible solution, and for anything
but small-sized problems it can only be recommended to let the optimizer know about a
feasible solution if one is available.
For many users the main advantage of hot-starting a mixed integer program will be the
increased performance, but others may also appreciate the fact that the returned
solution can only be better than (or in the worst case the same as) the solution fed in. This is
important in many applications where infeasible or low-quality solutions are not acceptable
even when time is short. Heuristics are thus combined with hot-started mixed integer
programs to yield a more reliable optimization tool.
The milo1 Example (Part 2 of 2)   For a small problem like the previously introduced
milo1, which can be solved to optimality without branching, it is not really useful to hot-
start. This example nevertheless illustrates the principle of how it can be done.

Figure 23: Hot-starting from initial guess (milo1)
# Define milo1 (not shown)
# Try to guess the optimal solution
myint <- list()
myint$xx <- c(5.0, 0.0)
# Hot-start the mixed integer optimizer
milo1$sol <- list(int=myint)
r <- mosek(milo1)
[19] More details on problem and solution status keys available at:
mosek.com → Documentation → Optimization tools manual → Symbolic constants reference.
2.5.3 Termination criteria
A candidate solution, i.e. a solution to the continuously relaxed mixed integer problem, is
said to be integer feasible if the criterion

    min( |x_j| − ⌊|x_j|⌋ ,  ⌈|x_j|⌉ − |x_j| )   ≤   max( δ1 , δ2 |x_j| )      (2.18)

is satisfied for all variables in the problem, j ∈ J. Hence, such a solution is defined as
feasible to the mixed integer problem. Whenever the optimizer locates an integer feasible
solution it will check if the criterion

    |z̄ − z|   ≤   max( δ3 , δ4 · max(1, |z̄|) )                               (2.19)

is satisfied. If this is the case, the integer optimizer terminates and reports the integer
feasible solution as an optimal solution. Please note that z̄ is a valid upper bound, and
z a valid lower bound, on the optimal objective value z*. In minimization problems, z
normally increases gradually during the solution process, while z̄ decreases each time a
better integer solution is located, starting at the warm-started or heuristically preprocessor-
generated solution value.
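The gap test (2.19) is plain arithmetic and easy to reproduce in R. The sample bounds below are invented for illustration, using the default tolerances δ3 = 0 and δ4 = 10^-4 from Table 2.20:

```r
# Reproduce the termination test (2.19) with invented bounds zbar (upper)
# and zlow (lower), using the default tolerances delta3 = 0, delta4 = 1e-4.
delta3 <- 0
delta4 <- 1e-4
zbar <- 5.0000   # best integer feasible objective found (upper bound)
zlow <- 4.9996   # best relaxation bound (lower bound)
abs(zbar - zlow) <= max(delta3, delta4 * max(1, abs(zbar)))  # TRUE -> stop
```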
The δ tolerances are specified using parameters and have default values as shown below.

Tolerance   Parameter name (type: dparam)   Default value
δ1          MIO_TOL_ABS_RELAX_INT           10^-5
δ2          MIO_TOL_REL_RELAX_INT           10^-6
δ3          MIO_TOL_ABS_GAP                 0
δ4          MIO_TOL_REL_GAP                 10^-4
δ5          MIO_NEAR_TOL_ABS_GAP            0
δ6          MIO_NEAR_TOL_REL_GAP            10^-3

Table 2.20: Integer optimizer tolerances.
Default values, shown here for MOSEK v7.0, are subject to change.

If an optimal solution cannot be located within a reasonable amount of time, it may be
advantageous to employ a relaxed termination criterion after some time. Whenever the
integer optimizer locates an integer feasible solution, and has spent at least the number
of seconds defined by the double-typed parameter MIO_DISABLE_TERM_TIME on
solving the problem, it will check whether the criterion

    |z̄ − z|   ≤   max( δ5 , δ6 · max(1, |z̄|) )                               (2.21)
is satisfied. If it is satisfied, the optimizer will report that the candidate solution is near
optimal and then terminate. Please note that the relaxed termination criterion is not
active by default, as the delay (MIO_DISABLE_TERM_TIME) is -1 by default.
Provided the delay (MIO_DISABLE_TERM_TIME) is different from -1, the integer
optimizer also offers two additional parameters to stop the optimizer prematurely once the
specified delay has passed. Beware that these parameters, shown in the table below, are not
able to guarantee any kind of near optimality, and with a default value of -1 they are ignored
until explicitly defined.

Parameter name (type: iparam)   Maximum number of ...
MIO_MAX_NUM_BRANCHES            Branches allowed.
MIO_MAX_NUM_RELAX               Relaxations allowed.

Another option is to terminate whenever the number of distinct integer feasible solutions
reaches the value of the parameter MIO_MAX_NUM_SOLUTIONS. Note that this
criterion does not depend on the delay (MIO_DISABLE_TERM_TIME).

Parameter name (type: iparam)   Maximum number of ...
MIO_MAX_NUM_SOLUTIONS           Integer feasible solutions allowed.

Finally, if all other termination criteria fail to be satisfied, the double-typed parameter
OPTIMIZER_MAX_TIME can be used to define the maximum number of seconds to
spend before terminating with the best integer feasible solution (if any). See Section 2.6
on how to specify all these parameters.
2.6 Parameter settings in MOSEK
The MOSEK optimization library offers a great deal of customization, allowing the user
to control the behavior of the optimization process and the returned output information. All
parameters are documented on the homepage of MOSEK[20], and all are supported
by this interface. Only a few are mentioned here.
Notice that the MSK_ prefix, required by the MOSEK C API, can be omitted
for string-typed parameter values in this interface. Similarly, the MSK_IPAR_,
MSK_DPAR_ and MSK_SPAR_ prefixes for parameter names have been removed
in favor of the iparam, dparam and sparam structures. Also, in this interface parameters
are case-insensitive. This means that parameter names and string-typed parameter values
can be written in both upper and lower case.
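As an illustration of these conventions, the following three assignments should all select the same integer-typed parameter and value, since the MSK_IPAR_ prefix is optional and matching is case-insensitive:

```r
# Three equivalent spellings of the same parameter setting; each
# assignment simply overwrites the previous iparam list.
lo1 <- list(sense = "max")
lo1$iparam <- list(MSK_IPAR_LOG = 10)
lo1$iparam <- list(LOG = 10)
lo1$iparam <- list(log = 10)
```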
The LOG-Parameter Example   LOG is a logging parameter controlling the amount
of information printed to the screen from all channels within the MOSEK optimization
library. The value of the parameter can be set to any integer from 0 (suppressing all
information) to the R value Inf (releasing all information); the default is 10.
Revisiting the lo1 example from Section 2.2.1, we can now try to silence the
optimization process as shown below.
Figure 24: Suppressing the optimization log with parameter LOG
lo1 <- list(sense = "max")
lo1$c <- c(3,1,5,1)
lo1$A <- Matrix(c(3,1,2,0,
                  2,1,3,1,
                  0,2,0,3), nrow=3, byrow=TRUE, sparse=TRUE)
blc <- c(30,15,-Inf)
buc <- c(30,Inf,25);      lo1$bc <- rbind(blc, buc);
blx <- c(0,0,0,0)
bux <- c(Inf,10,Inf,Inf); lo1$bx <- rbind(blx, bux);
lo1$iparam <- list(LOG = 0);
r <- mosek(lo1);
Notice   that   errors   and  warnings   from  the   interface   itself   will   not   be   aected  by   this
parameter.   These  are  controlled  separately  by  the  option  verbose.
[20] Check out: mosek.com → Documentation → C API manual → Parameter reference.
Adding parameters to the input arguments of a convex optimization problem is, as seen,
straightforward (refer to Figure 2). The list iparam should specify its variables with a
name, matching that of an integer-typed parameter, and a value, being the corresponding
parameter value. Similarly, the list dparam is for double-typed parameters, and sparam
is for string-typed parameters. Notice that reference-strings to an enum (a small set of
named possibilities) are also integer-typed, as the input reference-strings are automatically
converted to indexes by the interface.
The OPTIMIZER-Parameter Example   This parameter controls which optimizer is
used to solve a specific problem. The default value is OPTIMIZER_FREE, meaning that
MOSEK will try to figure out which optimizer to use on its own.
In this example we shall try to solve the lo1 example from Section 2.2.1 again, only this
time using the dual simplex method. This is specified by setting the parameter to the
enum reference-string value OPTIMIZER_DUAL_SIMPLEX as shown.
Figure 25: Selecting the dual simplex method with parameter OPTIMIZER
lo1 <- list(sense = "max")
lo1$c <- c(3,1,5,1)
lo1$A <- Matrix(c(3,1,2,0,
                  2,1,3,1,
                  0,2,0,3), nrow=3, byrow=TRUE, sparse=TRUE)
blc <- c(30,15,-Inf)
buc <- c(30,Inf,25);      lo1$bc <- rbind(blc, buc);
blx <- c(0,0,0,0)
bux <- c(Inf,10,Inf,Inf); lo1$bx <- rbind(blx, bux);
lo1$iparam <- list(OPTIMIZER = "OPTIMIZER_DUAL_SIMPLEX");
r <- mosek(lo1)
Setting a parameter to NULL will remove it from the list, according to the R language
definition. Setting a parameter to NA or NaN will, on the other hand, keep it on the list,
only to be ignored by the interface with warnings confirming that this took place. Errors
will be generated when a parameter name is not recognized or when the value defined for
it is not within its feasible range.
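The R semantics referred to above are easy to demonstrate without calling the optimizer at all:

```r
# Assigning NULL removes a list entry; assigning NA keeps the entry, and
# the interface will then ignore it with a warning.
params <- list(LOG = 0, OPTIMIZER = "OPTIMIZER_FREE")
params$LOG <- NULL         # entry removed entirely
params$OPTIMIZER <- NA     # entry kept, value NA
names(params)              # "OPTIMIZER"
```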
2.7 Exporting and Importing optimization models
This section concerns the export and import of optimization models in all standard
modeling formats (e.g. lp, mps and opf) and the MOSEK Task format task. These
files can be converted to and from the problem descriptions (R-variables) that the mosek
function can accept (refer to Figure 2).
                        lp    mps   opf   task
Linear problem          ✓     ✓     ✓     ✓
Conic constraint              ✓     ✓     ✓
Semidefinite variable                     ✓
Integer variable        ✓     ✓     ✓     ✓
MOSEK Parameter               ✓     ✓     ✓
Solution                            ✓     ✓

Figure 26: Supported file formats recognized by their file extension.
As seen in Figure 26, some of these modeling formats have more limitations than others.
The lp format only supports linear problems and integer variables. The mps format is
more general, with support for conic constraints and vendor-specific parameter settings. The
opf format further allows solutions to be defined, but only the task format handles the
full problem description with semidefinite variables.
In all formats supporting parameter settings, either none or the entire list of all
parameters in MOSEK is exported. This will lead to large problem descriptions when
parameters are imported from a file. To save and reload the constructed R-variables exactly
as they are, please use the save and load commands that are part of the R language definition.
2.7.1 The write and read functionality

• Exporting through the mosek_write function
A problem description can be exported with the mosek_write command, taking the problem
variable and destination as input, and returning a response that can be used to verify
success. The destination should be a filepath with a file extension matching the modeling
format to be used. If the file extension is not recognized as one of the supported modeling
formats, the mps format is used by default. Note, however, that this only affects the
contents of the file and does not change the file extension to mps. The specified filepath
is overwritten if it already exists, and otherwise created.
Exporting the lo1 Example   Revisiting the lo1 example from Section 2.2.1, we can
now try to export the optimization model to the opf format. In the current example we
export the problem description to a file called lo1.opf in the current working directory.
Beware that this file is overwritten if it already exists! Afterwards, using the standard
file.show command, the file is displayed in the R-console.
Figure 27: Exporting a Linear Optimization Problem
lo1 <- list(sense = "max")
lo1$c <- c(3,1,5,1)
lo1$A <- Matrix(c(3,1,2,0,
                  2,1,3,1,
                  0,2,0,3), nrow=3, byrow=TRUE, sparse=TRUE)
blc <- c(30,15,-Inf)
buc <- c(30,Inf,25);      lo1$bc <- rbind(blc, buc);
blx <- c(0,0,0,0)
bux <- c(Inf,10,Inf,Inf); lo1$bx <- rbind(blx, bux);
r <- mosek_write(lo1, "lo1.opf")
file.show("lo1.opf")
In the example we had neither parameters nor initial solutions defined in the problem
description, but if we had, the options usesol and useparam could be used to indicate
whether to write them to the file or not. By default, usesol is true and useparam is
false. Parameters are thus usually omitted in exported models. Note that since it does
not make sense for an initial solution to specify a solution or problem status, the solutions
exported as part of a problem description will always have unknown statuses. The export
functionality in the mosek function behaves differently in this regard (see Section 2.7.2).
• Importing through the mosek_read function
A problem description can be imported with the mosek_read command, taking the filepath
as input. If the file extension does not match one of the supported modeling formats, the
mps format is assumed and errors are given if the contents of the file do not respect
this syntax.
Importing the lo1 Example (Part 1 of 2)   We will now try to import the lo1
optimization model that was exported in Figure 27. In the current example we import
from a file called lo1.opf in the current working directory. Notice that we use the options
usesol and useparam to indicate that we do not wish to read solutions from the file, but
are interested in the parameter settings. Checking that the response code is fine, we print
a message and assign the lo1 variable. Please note the size of the fields iparam, dparam
and sparam of the imported problem description.
Figure  28:   Importing  a  Linear  Optimization  Problem
r <- mosek_read("lo1.opf", list(usesol=FALSE, useparam=TRUE))
if (identical(r$response$code, 0)) {
  print("Successfully read the optimization model")
  lo1 <- r$prob
}
The options usesol and useparam behave as for the mosek_write function, and can be
used to specify whether to read the solutions and parameter settings from the file. By
default, usesol is true and useparam is false. Note that for all parameters not defined in
the file, the default value in the MOSEK optimization library is assigned instead. This
means that the returned problem description will always be quite large when parameters
are imported.
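As a quick illustration, one can inspect the parameter fields after such an import. This is a sketch, assuming the lo1.opf file from Figure 27 exists in the working directory; MSK_IPAR_LOG is shown as one plausible parameter name.

```r
# Import with parameters: every MOSEK parameter gets a value, either
# from the file or the library default, so iparam becomes very large.
r <- mosek_read("lo1.opf", list(useparam=TRUE))
length(r$prob$iparam)        # typically several hundred entries
r$prob$iparam$MSK_IPAR_LOG   # the value of one specific parameter
```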
Another option of special interest is the string-typed matrixformat which can specify the
format of the imported sparse constraint matrix called A in Figure 2. The recognized values
for this option are shown below with pkgMatrix:COO (abbreviated COO) being the default.
All string values are case insensitive.
COO /   pkgMatrix:COO
The  sparse  coordinate  format  (aka  triplet  format)  from  package  Matrix.
CSC /   pkgMatrix:CSC
The  compressed  sparse  column  format  from  package  Matrix.
simple:COO
The  sparse  coordinate  format  (aka  triplet  format)  based  on  list-objects.
When setting matrixformat to the string-value simple:COO, the constraint matrix will
be a named list with scalars nrow and ncol giving, respectively, the number of rows and
columns of matrix A, and lists i, j and v giving the ij-coordinates of each value in v. All
these matrix formats are allowed in the problem description given as input to the mosek
and mosek_write functions.
Importing the lo1 Example (Part 2 of 2)   We will again try to import the lo1
optimization model that was exported in Figure 27, only this time ignoring all parameters
and with the matrixformat option set to simple:COO. This is shown in Figure 29 where
a file called lo1.opf is imported from the current working directory. The structure of the
imported constraint matrix is shown in Figure 30.
Figure  29:   Importing  a  Linear  Optimization  Problem  with  simple:COO
r <- mosek_read("lo1.opf", list(matrixformat="simple:COO"))
Figure  30:   Importing  a  Linear  Optimization  Problem  with  simple:COO
r$prob$A
$nrow
[1] 3

$ncol
[1] 4

$i
[1] 1 2 1 2 3 1 2 2 3

$j
[1] 1 1 2 2 2 3 3 4 4

$v
[1] 3 2 1 1 2 2 3 1 3
3.2   Quadratic  Convex  Optimization
3.2.1   Solving  QP  problems
Quadratic Convex Optimization is a generalization of linear programming in which
quadratic terms can be added to the objective function. Convex here emphasizes that
the entire formulation should define a convex optimization problem.
minimize      0.5 x^T Q x + c^T x + c^0
subject to    l^c <= Ax <= u^c ,
              l^x <=  x <= u^x .                         (3.3)
Two facts should be noted, before engaging in quadratic convex optimization:
• The problem is convex only if Q is positive semidefinite (all eigenvalues non-negative),
  which can be numerically hard to verify for large Q with zero or near-zero eigenvalues.
• The MOSEK optimization library does not support the combination of quadratic
  convex optimization with conic constraints.
If  the  problem  at  hand  involves  quadratic  objective  terms,  quadratic  convex  optimization
may very well be the best way to solve it.   Nevertheless, it is easy to try the conic quadratic
formulation of these problems, for which the optimizer will sometimes perform even better
(see  Section  2.3.3).
The qo1 Example   The following is an example of a linearly constrained problem with
a convex quadratic objective.
minimize      0.5 (2x_1^2 - 2x_3x_1 + 0.2x_2^2 + 2x_3^2) - x_2
subject to    x_1 + x_2 + x_3 >= 1 ,
              x_1, x_2, x_3 >= 0 .                       (3.4)
The  quadratic  objective  terms  are  easily  appended  on  top  of  the  linear  formulation  as
shown  in  the  following  R  code  (Figure  33).
Figure  33:   Quadratic  Convex  Optimization  (qo1)
qo1 <- list()
qo1$sense <- "min"
qo1$c <- c(0,-1,0)
qo1$A <- Matrix(c(1,1,1), nrow=1, byrow=TRUE, sparse=TRUE)
qo1$bc <- rbind(blc = 1,
                buc = Inf)
qo1$bx <- rbind(blx = rep(0,3),
                bux = rep(Inf,3))
qo1$qobj <- list(i = c(1, 3, 2, 3),
                 j = c(1, 1, 2, 3),
                 v = c(2, -1, 0.2, 2))
r <- mosek(qo1)
The objective term -2x_3x_1 has been specified with half the coefficient, i.e. -1, in the field
qobj. Note that this is in contrast to squared variables, e.g. 2x_1^2, as explained in the text.
From this example the input arguments for a quadratic convex program (3.3) follow easily
(refer to Figure 2, Section 2.1). The linear part of the objective function and constraints,
as well as the constraint and variable bounds, should all be specified as for linear programs
(see Section 2.2). The only addition to this is the list called qobj containing the triplets
of row indexes i, column indexes j and values v that define Q. The number of rows and
columns of matrix Q should not be specified, as these dimensions are already known from
the number of variables in the linear part of the problem.
The inputted Q is always assumed symmetric, and note that this is without loss of
generality. This is efficient as only the lower triangular part of Q thus needs to be specified.
Setting a coefficient at the lower triangular position (i,j) will, by symmetry, imply that the
same coefficient is also present at position (j,i). In order to add the objective term -2x_3x_1,
one would see it as -x_3x_1 - x_1x_3 and only input a coefficient of -1 to row 3 of column 1.
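The symmetry convention can be checked without MOSEK at all. The following pure-R sketch rebuilds the full symmetric Q from the lower-triangular triplets of the qo1 example and evaluates the objective at a test point:

```r
# Lower-triangular triplets of Q from the qo1 example
i <- c(1, 3, 2, 3); j <- c(1, 1, 2, 3); v <- c(2, -1, 0.2, 2)

# Mirror each entry to build the full symmetric Q
Q <- matrix(0, 3, 3)
for (k in seq_along(i)) { Q[i[k], j[k]] <- v[k]; Q[j[k], i[k]] <- v[k] }

# Objective 0.5*x'Qx + c'x at x = (1,1,1)
x <- c(1, 1, 1)
as.numeric(0.5 * t(x) %*% Q %*% x + c(0, -1, 0) %*% x)   # 0.1
```

At x = (1,1,1) the value is 0.5*(2 - 2 + 0.2 + 2) - 1 = 0.1, matching formulation (3.4).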
3.3   Separable  Convex  Optimization
3.3.1   Solving  SCOPT  problems
Separable Convex Optimization is a generalization of linear programming in which
arbitrary unary operators can be added to the objective function and constraints. Separable
means that each operator should depend on one variable only, and Convex means that the
entire formulation should define a convex optimization problem.
minimize      z(x) + c^T x + c^0
subject to    l^c <= w(x) + Ax <= u^c ,
              l^x <=         x <= u^x ,                  (3.5)

where

z(x) = sum_{j=1..n} z_j(x_j) ,                           z_j : R -> R,

w(x) = ( w_1(x), ..., w_i(x), ... )^T ,   w_i(x) = sum_{j=1..n} w_ij(x_j) ,   w_ij : R -> R.
                                                         (3.6)
Three facts should be noted, before engaging in separable convex optimization:
• Since the operators z_j(x_j) and w_ij(x_j) are arbitrarily defined, the optimizer for
  separable convex optimization has less knowledge, and can be expected to perform
  worse, than the optimizer for conic quadratic programs.
• No standard modeling file-format exists for separable convex optimization.
  (MOSEK has its own sco format defined for a subset of operators.)
• The MOSEK optimization library does not support the combination of separable
  convex optimization with integer variables or conic constraints.
If the problem at hand involves exponential or logarithmic functions, separable convex
optimization may be a good and efficient choice. Note, however, that undefined behaviour
can be observed if the problem is not convex (MOSEK will try to detect this). If the
problem does not involve exponential or logarithmic functions, it may very well
be possible to reformulate the optimization problem as a conic quadratic program (see e.g.
Section 2.3 and Appendix B).
For the sake of performance, and the usability of the sco file-format, callback functionality
to custom R evaluation code has not been supported. Instead, a list of common operators
with customizable coefficients has been added to this interface, making it both fast and
easy to use. The reader is encouraged to contact us, in case this list of operators is found
to be insufficient.
With constants f in R, g in R and h in R, the interface currently allows:

Operator      Definition       Domain
Entropy       f*x*ln(x)        0 < x < Inf
Exponential   f*exp(gx + h)    -Inf < x < Inf
Logarithm     f*ln(gx + h)     If g > 0:  -h/g < x < Inf
                               Otherwise: -Inf < x < -h/g
Power         f*(x + h)^g      If g > 0 and integer:  -Inf < x < Inf
                               If g < 0 and integer:  choose this  -h < x < Inf
                                                      or this  -Inf < x < -h
                               Otherwise:             -h < x < Inf

Figure 27: Supported unary operators.
The   sco1   Example   The   following   is   an   example   of   a   problem  with   a   non-linear
objective  optimized  over  a  linear  constraint  and  the  interior  of  a  unit  circle.
minimize      x_1 - ln(x_3)
subject to    x_1^2 + x_2^2 <= 1 ,
              x_1 + 2x_2 - x_3 = 0 ,
              x_3 >= 0 .                                 (3.7)
The objective contains one logarithm operator, and the first constraint contains two power
operators. These operators are easily appended on top of the linear formulation as shown
in the following R code (Figure 34).
Figure  34:   Separable  Convex  Optimization  (sco1)
sco1 <- list(sense = "min")
sco1$c <- c(1,0,0)
sco1$A <- Matrix(c(0, 0, 0,
                   1, 2, -1), nrow=2, byrow=TRUE, sparse=TRUE)
sco1$bc <- rbind(blc = c(-Inf,0),
                 buc = c(1,0))
sco1$bx <- rbind(blx = c(-Inf,-Inf,0),
                 bux = rep(Inf,3))
NUMOPRO <- 1;  NUMOPRC <- 2;
opro <- matrix(list(), nrow=5, ncol=NUMOPRO)
oprc <- matrix(list(), nrow=6, ncol=NUMOPRC)
rownames(opro) <- c("type","j","f","g","h")
rownames(oprc) <- c("type","i","j","f","g","h")
opro[,1] <- list("LOG", 3, -1.0, 1.0, 0.0)
oprc[,1] <- list("POW", 1, 1, 1.0, 2.0, 0.0)
oprc[,2] <- list("POW", 1, 2, 1.0, 2.0, 0.0)
sco1$scopt <- list(opro=opro, oprc=oprc)
r <- mosek(sco1)
From this example the input arguments for a separable convex program (3.5) follow easily
(refer to Figure 2, Section 2.1). The linear part of the objective function and constraints,
as well as the constraint and variable bounds, should all be specified as for linear programs
(see Section 2.2). The only addition to this is the list called scopt containing the list-typed
operator matrices opro (for objective) and oprc (for constraints).
The operator matrices have a column for each operator and a row for each descriptive
element. The opro matrix has five rows called type, j, f, g, h, while the oprc matrix has
six rows called type, i, j, f, g, h. Row type should specify the operator type as a string,
being either entropy "ENT", exponential "EXP", logarithm "LOG" or power "POW". Row
i (not in opro) should specify the index of the constraint to which the non-linear term
should be added. Row j should specify the variable index of the operator. Rows f, g and
h should specify the coefficients of the operator (see Figure 27).
Examples are:

Terms in opro                          type    j   f     g     h
In objective:  exp(1.4x_1 + 1.1)       "EXP"   1   1.0   1.4   1.1
In objective:  ln(1.5x_2 + 1.2)        "LOG"   2   1.0   1.5   1.2
In objective:  2.1*sqrt(x_3)           "POW"   3   2.1   0.5   0.0
Terms in oprc                              type    i   j   f     g      h
In constraint one:  0.1x_1*ln(x_1)         "ENT"   1   1   0.1   NA     NA
In constraint two:  (x_1 + 2.0)^1.75       "POW"   2   1   1.0   1.75   2.0
Note that the definition of the entropy operator (see Figure 27) was the only one
defined without g and h. Thus, for entropy operators, these two unused rows in the operator
matrix can be set to either zero or any empty definition (NULL, NA or NaN).
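For instance, the entropy term from the table above could be set up as follows (a sketch; the NA entries mark the unused g and h rows):

```r
# One entropy operator 0.1*x1*ln(x1) added to constraint 1;
# rows g and h are unused for "ENT" and may be NA, NULL, NaN or 0.
oprc <- matrix(list(), nrow=6, ncol=1)
rownames(oprc) <- c("type","i","j","f","g","h")
oprc[,1] <- list("ENT", 1, 1, 0.1, NA, NA)
```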
3.3.2 Convexity, differentiability and safe bounds
• Ensuring convexity
The feasible solution space of a separable convex optimization problem is required to be
convex. If a simple set of convexity-preserving rules is complied with, the problem may
further be called disciplined convex. Disciplined convexity is not required in this interface,
but will be checked to help the user identify modeling issues.
Disciplined convexity means that the expression g_i(x) of a constraint l^c_i <= g_i(x) <= u^c_i is
required to be concave if the lower bound is finite (i.e. not -Inf), and required to be convex
if the upper bound is finite (i.e. not Inf). This also implies that the expression should be
linear if both the lower and upper bound are defined with finite values.
In case of separable convex optimization, this means that each unary operator of an
expression, required to be convex (resp. concave), is used only in its convex (resp.
concave) domain. A problem which is disciplined convex is guaranteed to be convex.
Operator      Definition       Convexity (domains defined in Figure 27)
Entropy       f*x*ln(x)        Convex in domain.
Exponential   f*exp(gx + h)    Convex in domain.
Logarithm     f*ln(gx + h)     Concave in domain.
Power         f*(x + h)^g      If g is even integer:  convex in domain.
                               If g is odd integer:   concave (x < -h), convex (-h < x).
                               If 0 < g < 1:          concave in domain.
                               Otherwise:             convex in domain.

Figure 28: Convexity of unary operators assuming f > 0.
           Convexities are exactly opposite for f < 0.
• Ensuring differentiability with safe bounds
Figure 27 displayed the domains in which each operator is well-defined and differentiable.
If the MOSEK optimizer for some reason tried to evaluate one of these operators outside
of its domain, an error would occur and the optimization would terminate. Since MOSEK
may not be able to guess an initial feasible solution to the problem, it may actually have
to work its way through the infeasible region of the problem.
Essentially, operators may be evaluated in points where the constraint bounds are
not satisfied. To avoid causing the optimizer to terminate due to evaluations outside
operator domains, variable bounds should be specified such that operator domains are
always satisfied. This is called safe bounds, and will work because the MOSEK optimizer
is guaranteed to satisfy variable bounds in each iteration.
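For the sco1 example, a safe bound for the logarithm operator -ln(x_3) could look as follows. This is a sketch; the epsilon value is an assumption and should be chosen to suit the problem at hand.

```r
# The operator -log(x3) requires x3 > 0, but the model only states
# x3 >= 0. A small positive lower bound keeps every iterate strictly
# inside the operator domain, since variable bounds are satisfied in
# each iteration.
eps <- 1e-6   # assumed tolerance, problem dependent
sco1$bx <- rbind(blx = c(-Inf, -Inf, eps),
                 bux = rep(Inf, 3))
```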
3.3.3 Importing and exporting model files
In case the problem description contains separable convex operators, the string-typed
option scofile must be used to specify the filepath to which these operators should be
written. Failing to specify this option when calling mosek_write (see Section 2.7) will
result in an error. It is recommended to use the sco file extension, but the interface will
allow any (even a missing one) to be used without warning.
Exporting the sco1 Example   Revisiting the sco1 example from Section 3.3.1, we
can now try to export the optimization model to the opf format with operators in an sco
file. In particular, the current example will export the problem description to respectively
sco1.opf and sco1.sco in the current working directory. Beware that these files are
overwritten if they already exist! Afterwards, using the standard file.show command,
the files are displayed in the R-console.
Figure  35:   Exporting  a  Separable  Convex  Optimization  Problem
# Define sco1 (not shown)
r <- mosek_write(sco1, "sco1.opf", list(scofile="sco1.sco"))
file.show("sco1.opf", "sco1.sco")
The same option, scofile, can be used when calling mosek_read (see Section 2.7). This
function will read in a problem description of a separable convex optimization problem from
the location of two files. These two files can also be read independently of each other.
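A sketch of such a re-import, assuming the two files from Figure 35 exist in the working directory:

```r
# Read the linear part from sco1.opf and the separable operators
# from sco1.sco back into one problem description.
r <- mosek_read("sco1.opf", list(scofile="sco1.sco"))
r$prob$scopt$opro   # the operator matrix read back from sco1.sco
```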
4   Developers  corner
4.1   Introduction
In the project's src folder you can find the header and source files (*.h and *.cc) containing
the C++ code of the R-to-MOSEK interface. Moreover, the Rmosek.cc file contains the
functions that are callable from R.
This code has been written and commented so that it should be easy to understand
how the interface handles the input and output structures, prints to the screen and
communicates with the MOSEK optimization library. Four important references proved
to be useful in the development of this interface.
• The MOSEK C API Reference [24]
• R Language Definition [25]
• Writing R Extensions [25]
• R Installation and Administration [25]
The  program  is  free  software  and  has  been  released  under  the  terms  of   the  GNU  Lesser
General Public License as published by the Free Software Foundation, either version 2.1 of
the License,  or any later version.   You are welcome to extend upon this release and modify
the  interface  to  your  personal  needs.   The  source  code  is  distributed  as  part  of  the Rmosek
package.
Notes on the Matrix package:   A number of bugfixes were introduced in the Matrix
package after it started depending on R(>=2.12.0). These fixes have been backported in
this package, to older versions of the package Matrix(>=0.9996875-3) depending only on
R(>=2.10.0). Effectively, more users will be able to run Rmosek. The source code for this
backport is located in the compatibility folder from which it is included in the local_stubs.h
and local_stubs.cc files.
[24] Check out: mosek.com → Documentation → C API manual → API reference.
[25] Check out: r-project.org → Manuals
4.2   Limitations  and  Missing  Features
The interface currently provides the features needed for basic research and educational
purposes, but has many shortcomings when compared to the Matlab-to-MOSEK
interface [26]. This is a short list of missing features.
1. Direct input of Quadratic Constrained Programs: At the moment the user will
   need to manually transform the Quadratic Constrained Problem to a Conic Program
   as the interface only provides for the direct input of linear and conic programs.
   The MOSEK optimization library actually supports the direct input of Quadratic
   Constrained programs, so this should only require a small amount of coding.
2. Custom R-coded callback functions: While the MOSEK optimization library
   executes an optimization procedure, it is possible to define a callback function to
   check on the progress and terminate ahead of time. It could be useful to make this
   possibility available from R, such that a callback option pointing to an R-function
   could be specified.
[26] Check out: mosek.com → Optimization toolbox for MATLAB.
A   Command  reference
• r <- mosek_version()
Retrieves a string containing the version number of the utilized MOSEK optimization
library. Briefly mentioned in Section 2.1.
• r <- mosek(problem, opts = list())
Solves an optimization problem using the MOSEK optimization library. The input variable
problem could have any name, but should be a list object describing the optimization
problem using the following fields.
problem                        Problem description
..$sense                       Objective sense, e.g. "max" or "min"
..$c                           Objective coefficients
..$c0                          Objective constant
..$A                           Constraint matrix
..$bc                          Lower and upper constraint bounds
..$bx                          Lower and upper variable bounds
..$cones                       Conic constraints
..$bardim                      Semidefinite variable dimensions
..$barc                        Semidefinite objective coefficients
..$barA                        Semidefinite constraint coefficients
..$intsub                      Integer variable indexes
..$qobj                        Quadratic convex optimization
..$scopt                       Separable convex optimization
..$iparam/$dparam/$sparam      Parameter list
....$<MSK_PARAM>               Value of any <MSK_PARAM>
..$sol                         Initial solution list
....$itr/$bas/$int             Initial solution description
The argument sense is the goal of the optimization and should indicate whether we wish
to maximize or minimize the objective function given by f(x) = c^T x + c^0, where c is the
objective coefficients and c0 the objective constant. The matrix A together with bounds
bc describes the linear constraints of the problem, while variable bounds are given by bx.
These input arguments describe a linear program (see Section 2.2).
The argument cones is used for Conic Programming and has been described in Section
2.3.   The issue regarding the transformation of other convex problems to the formulation of a
Conic Program has been addressed in Appendix B, with more details on the transformation
of  Quadratic  Programs  in  Section  2.3.3.
The arguments bardim, barc and barA are used for Semidefinite Programming and
have been described in Section 2.4.
The  argument  intsub  is  used  to  specify  whether  some  of  the  variables  should  only  be
allowed  to  take  integer  values  as  part  of  a  feasible  solution.   This  is  necessary  for  the  class
of  Mixed  Integer  Programs  described  in  Section  2.5.
The  argument   qobj   is   used  for   quadratic  convex  optimization  and  allows   quadratic
terms  in  the  objective  function.   This  argument  has  been  described  in  Section  3.2.
The  argument  scopt   is  used  for  separable  convex  optimization  and  allows  non-linear
unary operators in formulations involving exponential or logarithmic terms.   This argument
has  been  described  in  Section  3.3.
The  arguments   iparam,   dparam  and  sparam  are  used  to  specify  lists   of   integer-,
double-  and  string-typed  parameters  for  the  MOSEK  optimization  library  in  order  to  aid
or  take  control  of  the  optimization  process.   This  has  been  described  in  Section  2.6.
The argument sol is used to specify an initial solution used to hot-start the optimization
process,   which  is  likely  to  increase  the  solution  speed.   This  has  been  described  for  linear
programming  in  Section  2.2.4  and  for  mixed  integer  programming  in  Section  2.5.2.
The options opts could have any name, and are, in fact, often input directly as an
anonymous list. It has the following fields.
opts   Options  for  the  mosek  function
..$verbose   Output  logging  verbosity
..$usesol   Whether  to  use  the  initial  solution
..$useparam   Whether  to  use  the  specied  parameter  settings
..$soldetail   Level  of  detail  used  to  describe  solutions
..$getinfo   Whether  to  extract  MOSEK  information  items
..$writebefore   Filepath  used  to  export  model
..$writeafter   Filepath  used  to  export  model  and  solution
The  argument  verbose  is  used  to  specify  the  amount  of  logging  information  given  by
the  interface.   This  is  described  in  Section  2.8.1.
The argument usesol can be used to ignore the problem field sol, while useparam
can be used to ignore the fields iparam, dparam and sparam, without having to change the
problem description. This usage has not been covered by the user guide, but it should be
noted that both options are true by default, contrary to the context of Section 2.7.
The  arguments  soldetail   and  getinfo  are  used  to  control   the  amount  of  information
returned  about  the  solution,  and  the  MOSEK  optimization  library  internals,  respectively.
This  is  described  in  Section  2.8.4.
The arguments writebefore and writeafter are used to see the optimization model
(and identified solution) constructed by MOSEK, written out to a file just before (or
immediately after) the optimization process. This is described in Section 2.7.
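A sketch of both options in one call (the file names are arbitrary; the modeling formats follow the extensions):

```r
# Export the model as MOSEK received it, and the model plus any
# identified solution after optimization has finished.
r <- mosek(lo1, list(writebefore = "lo1_in.opf",
                     writeafter  = "lo1_out.task"))
```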
The  resulting  function  output  variable  r,   returned  by  the  interface,   is  capable  of  holding
the  three  types  of  solutions  listed  below,  along  with  the  response  of  the  function  call.   The
existence  of   a  solution  depends  on  the  optimization  problem  and  the  algorithm  used  to
solve  it.
r                         Result of the mosek function
..$response               Response from the MOSEK optimization library
....$code                 ID-code of response
....$msg                  Human-readable message
..$sol                    Solution list
....$itr/$bas/$int        Solution description
......$solsta             Solution status
......$prosta             Problem status
......$skc                Linear constraint status keys
......$skx                Variable bound status keys
......$skn                Conic constraint status keys
......$xc                 Constraint activities
......$xx                 Variable activities
......$barx               Semidefinite variable activities
......$slc                Dual variable for constraint lower bounds
......$suc                Dual variable for constraint upper bounds
......$slx                Dual variable for variable lower bounds
......$sux                Dual variable for variable upper bounds
......$snx                Dual variable of conic constraints
......$bars               Dual variable of semidefinite domains
......$pobjval            Primal objective value [27]
......$dobjval            Dual objective value [27]
......$pobjbound          Best primal objective bound from relaxations [27]
......$maxinfeas          Maximal solution infeasibilities [27]
........$pbound           In primal inequality constraints
........$peq              In primal equality constraints
........$pcone            In primal cone constraints
........$dbound           In dual inequality constraints
........$deq              In dual equality constraints
........$dcone            In dual cone constraints
........$int              In integer variables
..$iinfo/$dinfo           MOSEK information list [28]
....$<MSK_INFO>           Value of any <MSK_INFO>

[27] Only available if requested by option soldetail.
The response can be used to determine whether a function call returned successfully.
It contains a numeric response code, code, and a string response message, msg, both
explained in "Errors, warnings and response codes" in Section 2.2.1. Please also see Section
2.2.5 on how to use this information.
The solution itr will exist whenever the interior-point algorithm has been executed.
This algorithm is used by default for all continuous optimization problems, and has been
described in Sections 2.2, 2.3 and 2.4.
The solution bas will exist whenever the interior-point algorithm or the simplex method
has  been  executed  to  solve  a  linear  optimization  problem  (see  Section  2.2).
The  solution  int   will   exist  whenever  variables  are  required  to  take  integer  values  and
the  corresponding  Mixed  Integer  Program  has  been  solved  (see  Section  2.5).
The   iinfo   and   dinfo   lists   hold   information   on   the   MOSEK  optimization   library
internals,  and  have  been  described  in  Section  2.8.4.
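A typical pattern for consuming this result might look as follows (a sketch; the exact status string returned in solsta is assumed here to be "OPTIMAL"):

```r
# Check the response code, then inspect the interior-point solution.
r <- mosek(lo1)
if (identical(r$response$code, 0) &&
    identical(r$sol$itr$solsta, "OPTIMAL")) {
  print(r$sol$itr$xx)   # variable activities of the optimal solution
}
```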
• mosek_clean()
Forces the early release of any previously acquired MOSEK license. If you do not share
a limited number of licenses among multiple users, you do not need to use this function.
The acquisition of a new MOSEK license will automatically take place at the next call to
the function mosek given a valid problem description, using a small amount of extra time.
This usage is briefly mentioned in Section 2.1.
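In a shared-license setting the call might be used like this:

```r
r <- mosek(lo1)   # acquires a MOSEK license on first use
mosek_clean()     # releases it immediately instead of at unload
```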
For advanced users:   If you utilize the .Call convention directly, bypassing the mosek R-
function definition, an Rf_error will result in an unclean memory space. For this reason
you can also use mosek_clean to tidy up uncleaned resources in case an error occurs.
Otherwise this cleaning will not happen until the next call to mosek or until the library is
unloaded. This usage has not been documented elsewhere.
• r <- mosek_write(problem, filepath, opts = list())
Outputs a model of an optimization problem in any standard modeling file format (e.g. lp,
opf, mps, task, etc.), controlled by a set of options. The modeling file format is selected
based on the extension of the model file. This function is used and explained in Section 2.7.
[28] Only available if requested by option getinfo.
The input variable problem could have any name, but should be a list object describing
the optimization problem using the same fields as for the mosek function on page 70.
The input variable filepath should be a string describing the path to the model file. This
path can either be absolute or relative to the working directory, and will overwrite any
existing data at this location. The specified location will be the destination of the exported
model.
The options opts could have any name, and are, in fact, often specified directly as an
anonymous list. It has the following fields.

opts                Options for the mosek_write function
..$verbose          Output logging verbosity
..$usesol           Whether to write the initial solution
..$useparam         Whether to write all parameter settings
..$getinfo          Whether to extract MOSEK information items
..$scofile          Destination of operators from scopt
The  argument  verbose  is  used  to  specify  the  amount  of  logging  information  given  by
the  interface.   This  is  described  in  Section  2.6.
The argument usesol can be used to ignore the problem field sol, while useparam
can be used to ignore the fields iparam, dparam and sparam, when writing the optimization
model. By default, the argument usesol is true and useparam is false, contrary to the
defaults of the mosek function. These two options are covered in Section 2.7.
The argument getinfo is used to indicate whether to extract information about the
MOSEK optimization library internals. This is described in Section 2.8.4.
The argument scofile is used in separable convex optimization to specify the file to
which the operators must be saved. This is described in Section 3.3.3.
The  resulting  function  output  variable  r,  returned  by  the  interface,  holds  the  response  of
the  function  call.
r                     Result of the mosek_write function
..$response           Response from the MOSEK optimization library
....$code             ID-code of response
....$msg              Human-readable message
..$iinfo/$dinfo       MOSEK information list [29]
....$<MSK_INFO>       Value of any <MSK_INFO>

[29] Only available if requested by option getinfo.
74
COMMAND  REFERENCE
The response can be used to determine whether a function call returned successfully. It contains a numeric response code, code, and a string response message, msg, both explained in Errors, warnings and response codes in Section 2.2.1. Please also see Section 2.7 for an example of how to use this information.
The iinfo and dinfo lists hold information on the MOSEK optimization library internals, as described in Section 2.8.4.
r <- mosek_read(filepath, opts = list())
Interprets a model from any standard modeling file format (e.g. lp, opf, mps, task, etc.), controlled by a set of options. The result contains an optimization problem which is compliant with the input specifications of function mosek. This function is used and explained in Section 2.7.
The input variable filepath should be a string describing the path to the file. This path can either be absolute or relative to the working directory. The specified location will be the source of the optimization model to be read.
The options argument opts may have any name and is, in fact, often input directly as an anonymous list. It has the following fields.
opts   Options for the mosek_read function
..$verbose   Output logging verbosity
..$usesol   Whether to read an initial solution
..$useparam   Whether to read all parameter settings
..$getinfo   Whether to extract MOSEK information items
..$scofile   Source of operators read to scopt
..$matrixformat   The sparse format of the constraint matrix
The argument verbose is used to specify the amount of logging information given by the interface. This is described in Section 2.6.
The argument usesol can be used to ignore the problem field sol, while useparam can be used to ignore the fields iparam, dparam and sparam, when reading the optimization model. By default, the argument usesol is true and useparam is false, contrary to the defaults of the mosek function. These two options are covered in Section 2.7.
The argument getinfo is used to indicate whether to extract information about the MOSEK optimization library internals. This is described in Section 2.8.4.
The argument scofile is used in separable convex optimization to specify the file from which the operators should be read. This is described in Section 3.3.3.
The argument matrixformat determines the matrix format of the problem field A. Currently a sparse coordinate (COO) format and a compressed sparse column (CSC) format are supported from package Matrix, along with a simple list-based coordinate format. This is described in Section 2.7.
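A matching sketch for reading a model back. This assumes the Rmosek package is loaded and that a file model.opf exists; the matrixformat identifier is illustrative only, see Section 2.7 for the accepted values.

```r
# Sketch: import a model and request the constraint matrix in a
# compressed sparse column (CSC) representation.
r <- mosek_read("model.opf",
                opts = list(matrixformat = "CSC"))  # illustrative identifier

if (r$response$code == 0) {
  prob <- r$prob   # same fields as the input to the mosek function
  str(prob$A)      # inspect the imported constraint matrix
}
```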
The resulting function output variable r, returned by the interface, holds the response of the function call.
r   Result of the mosek_read function
..$response   Response from the MOSEK optimization library
....$code   ID-code of response
....$msg   Human-readable message
..$prob   Problem description
..$iinfo/$dinfo   MOSEK information list (only available if requested by option getinfo)
....$<MSK_INFO>   Value of any <MSK_INFO>
The response can be used to determine whether a function call returned successfully. It contains a numeric response code, code, and a string response message, msg, both explained in Errors, warnings and response codes in Section 2.2.1. Please see Section 2.7 for an example of how to use this information.
The output variable prob is a list object describing the imported optimization problem. It has the same fields as the input variable problem for the mosek function on page 70.
The iinfo and dinfo lists hold information on the MOSEK optimization library internals, as described in Section 2.8.4.
B   Conic  transformations
This appendix will introduce the reader to some of the transformations that make it possible to formulate a large class of non-linear problems as Conic Quadratic Programs.
Transformations do not have to be hard, and some non-linear constraints easily take the shape of a quadratic cone (see Section 2.3.1 for definitions). Linear constraints can be added to a Conic Quadratic Program directly without transformations.
The √x Example   Given constraint √x ≥ t with x, t ≥ 0, we rewrite it to a rotated quadratic cone as follows:

    √x ≥ t
    x ≥ t^2
    2xr ≥ t^2   (B.1)

with linear relationship

    r = 1/2 .   (B.2)
The definition of linear relationships has hardly any effect on the speed of the optimization process, and many techniques are implemented in MOSEK to effectively reduce the size of the problem. Thus many CQP formulations solve faster than could be expected from the larger model that results from almost any transformation.
The x^(3/2) Example   Given constraint x^(3/2) ≤ t with x, t ≥ 0, we rewrite it to a pair of rotated quadratic cones and three linear relationships as follows. Note that from line five to six we use the results of the √x example above.

    x^(3/2) ≤ t
    x^2 · x^(−1/2) ≤ t
    x^2 ≤ √x · t
    x^2 ≤ 2st,   2s ≤ √x
    x^2 ≤ 2st,   w ≤ √v,   w = 2s,   v = x
    x^2 ≤ 2st,   w^2 ≤ 2vr,   w = 2s,   v = x,   r = 1/2
    x^2 ≤ 2st,   w^2 ≤ 2vr,   w = s,   v = x,   r = 1/8 .   (B.3)
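As a sanity check (not part of the original derivation), chaining the two cones and the linear relationships of (B.3) recovers the original constraint:

```latex
% Tight case of the second cone, with w = s, v = x, r = 1/8:
s^2 \le 2 \cdot x \cdot \tfrac{1}{8} = \tfrac{x}{4}
\quad\Longrightarrow\quad 2s \le \sqrt{x} ,
% which, substituted into the first cone, gives
x^2 \le 2st \le \sqrt{x}\, t
\quad\Longrightarrow\quad x^{3/2} \le t .
```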
More examples and transformation rules can be found online: mosek.com → Documentation → Optimization tools manual → Modeling → Conic optimization.
Monomials in General   The crucial step in transforming the individual terms (monomials) of a polynomial function to the form of a conic program is the following recursion, which holds for positive integers n and non-negative variables y_j. Here, each line implies the next, and the inequality ends up having the same form it began with, only with n reduced by one. Repeating this procedure until n = 1, the inequality will finally take the form of a rotated quadratic cone. From the first expression, we move the coefficients around to get line two, substitute variables by the cones in (B.5) to get line three, and take the square root on both sides to reach line four.
    s^(2^n) ≤ 2^(n·2^(n−1)) [ y_1 y_2 ⋯ y_(2^n) ]
    s^(2^n) ≤ 2^((n−1)·2^(n−1)) [ (2y_1 y_2)(2y_3 y_4) ⋯ (2y_(2^n −1) y_(2^n)) ]
    s^(2^n) ≤ 2^((n−1)·2^(n−1)) [ x_1^2 x_2^2 ⋯ x_(2^(n−1))^2 ]
    s^(2^(n−1)) ≤ 2^((n−1)·2^(n−2)) [ x_1 x_2 ⋯ x_(2^(n−1)) ]   (B.4)
Also note that the definitions of new variables created by this recursion all take the form of a rotated quadratic cone as shown below, with all y_j ≥ 0.

    x_j^2 ≤ 2 y_(2j−1) y_(2j)   ∀j   (B.5)
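For concreteness (this worked instance is an addition, not part of the original text), one pass of the recursion with n = 2 reads:

```latex
s^{4} \le 2^{4}\,[\,y_1 y_2 y_3 y_4\,]
\;\Longleftrightarrow\;
s^{4} \le 2^{2}\,[\,(2y_1 y_2)(2y_3 y_4)\,]
\;\Longrightarrow\;
s^{4} \le 2^{2}\,[\,x_1^2 x_2^2\,]
\;\Longrightarrow\;
s^{2} \le 2\,x_1 x_2 ,
```

with the new variables defined by the rotated cones x_1^2 ≤ 2 y_1 y_2 and x_2^2 ≤ 2 y_3 y_4 from (B.5); the last inequality is itself a rotated quadratic cone, so the recursion terminates.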
Type I   Having the inequality x^(p/q) ≤ t for some rational exponent p/q ≥ 1, we are now able to transform it into a set of cones by substituting p = 2^n − w for positive integers n and w. This will yield a form close to the top of (B.4) as shown below, where the variables x and t just need to be coupled with s and y for the recursion to be usable. An example of this follows in (B.9).

    x^(p/q) ≤ t
    x^((2^n − w)/q) ≤ t
    x^(2^n) ≤ x^w t^q   (B.6)
Type II   The inequality x^(p/q) ≥ t for some rational exponent 0 ≤ p/q ≤ 1 and x ≥ 0 can be transformed into the form of a type I monomial quite easily.

    x^(p/q) ≥ t
    x ≥ t^(q/p)   (B.7)
Type III   Having instead the inequality x^(−p/q) ≤ t for any integers p ≥ 1, q ≥ 1, we can use the transform shown below. This again will yield a form close to the top of (B.4), where s is a constant and the variables just need to be coupled with y. You need to choose n such that 2^n ≥ p + q in order to make this transformation work. See example (B.10).

    x^(−p/q) ≤ t
    1 ≤ x^p t^q   (B.8)
Monomials - Example 1   Given x^(5/3) ≤ t, we rewrite it as shown below.

    x^(5/3) ≤ t
    x^5 ≤ t^3
    x^8 ≤ x^3 t^3
    s^8 ≤ 2^12 [ y_1 y_2 ⋯ y_8 ]
    s = x,   y_1 = y_2 = y_3 = x,   y_4 = y_5 = y_6 = t,   y_7 = 2^(−12),   y_8 = 1   (B.9)
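Substituting the couplings of (B.9) back into the cone form confirms the original constraint (this check is an addition to the original text):

```latex
s^8 \le 2^{12}\, y_1 \cdots y_8
     = 2^{12}\, x^3\, t^3\, 2^{-12} \cdot 1
     = x^3 t^3
\;\Longrightarrow\; x^8 \le x^3 t^3
\;\Longleftrightarrow\; x^5 \le t^3
\;\Longleftrightarrow\; x^{5/3} \le t .
```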
Monomials - Example 2   Given x^(−5/2) ≤ t, we rewrite it as shown below.

    x^(−5/2) ≤ t
    x^(−5) ≤ t^2
    1 ≤ x^5 t^2
    s^8 ≤ 2^12 [ y_1 y_2 ⋯ y_8 ]
    s = (2^12)^(1/8),   y_1 = y_2 = … = y_5 = x,   y_6 = y_7 = t,   y_8 = 1   (B.10)
Polynomials   Now that we have seen how to transform the individual terms (monomials) of a polynomial function to the form of a conic program, it is easy to transform entire polynomial constraints. We assume a_j ≥ 0 for monomials of type I and III, and a_j ≤ 0 for monomials of type II.

    Σ_{j=1}^n a_j x_j^(p_j/q_j) ≤ b   (B.11)
Substituting each monomial by a new variable u_j, we are able to isolate each monomial by itself and use the previously defined transformation rules.

    Σ_{j=1}^n a_j u_j ≤ b
    x_j^(p_j/q_j) ≤ u_j   ∀j of type I and III
    x_j^(p_j/q_j) ≥ u_j   ∀j of type II   (B.12)
Regarding the objective function, minimization problems can be rewritten as shown below, matching (B.11), so the isolation of monomials and the whole transformation used above apply here as well. In this way the objective function can also take the linear form required by conic programs.

    minimize f(x)   ⇔   minimize   b
                        subject to f(x) ≤ b   (B.13)
Polynomials - Example 1   This is a problem with type III monomials

    minimize     c^T x
    subject to   Σ_{j=1}^n f_j / x_j ≤ b
                 x ≥ 0   (B.14)

where it is assumed that f_j > 0 and b > 0. It is equivalent to

    minimize     c^T x
    subject to   Σ_{j=1}^n f_j z_j = b
                 v_j = √2            j = 1, …, n
                 v_j^2 ≤ 2 z_j x_j   j = 1, …, n
                 x, z ≥ 0   (B.15)
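To see the equivalence (this verification is an addition to the original text), note that each cone in (B.15) with v_j fixed at √2 enforces z_j ≥ 1/x_j:

```latex
v_j^2 \le 2 z_j x_j ,\quad v_j = \sqrt{2}
\;\Longrightarrow\; 2 \le 2 z_j x_j
\;\Longrightarrow\; z_j \ge 1/x_j ,
```

so Σ f_j / x_j ≤ Σ f_j z_j = b, recovering the constraint of (B.14).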
Polynomials - Example 2   The objective function with a mixture of type I and type III monomials

    minimize   x^2 + x^(−2)   (B.16)

is used in statistical matching and can be formulated as

    minimize     u + v
    subject to   x^2 ≤ u
                 x^(−2) ≤ v   (B.17)
which is equivalent to the quadratic conic optimization problem

    minimize     u + v
    subject to   x^2 ≤ 2uw
                 s^2 ≤ 2 y_21 y_22
                 y_21^2 ≤ 2 y_1 y_2
                 y_22^2 ≤ 2 y_3 y_4
                 w = 1
                 s = 2^(3/4)
                 y_1 = y_2 = x
                 y_3 = v
                 y_4 = 1   (B.18)
B.1   The Quadratic Program
Any convex quadratic problem can be stated in the form

    minimize     0.5 ‖Fx‖^2 + c^T x
    subject to   0.5 ‖Gx‖^2 + a^T x ≤ b   (B.19)
where F and G are matrices and c and a are column vectors. Here a convex quadratic term x^T Q x would be written ‖Fx‖^2, where F^T F = Q is the Cholesky factorization. For simplicity's sake we assume that there is only one constraint, but it should be obvious how to generalize the methods to an arbitrary number of constraints. Problem (B.19) can be reformulated as

    minimize     0.5 ‖t‖^2 + c^T x
    subject to   0.5 ‖z‖^2 + a^T x ≤ b
                 Fx − t = 0
                 Gx − z = 0   (B.20)
after the introduction of the new variables t and z. It is easy to convert this problem to a conic quadratic optimization problem, i.e.

    minimize     v + c^T x
    subject to   p + a^T x = b
                 Fx − t = 0
                 Gx − z = 0
                 w = 1
                 q = 1
                 ‖t‖^2 ≤ 2vw,   v, w ≥ 0
                 ‖z‖^2 ≤ 2pq,   p, q ≥ 0   (B.21)
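With w and q fixed at 1, the two rotated cones in (B.21) reduce exactly to the quadratic terms of (B.20) (this check is an addition to the original text):

```latex
\|t\|^2 \le 2vw,\ w = 1 \;\Longrightarrow\; v \ge 0.5\,\|t\|^2 ,
\qquad
\|z\|^2 \le 2pq,\ q = 1 \;\Longrightarrow\; p \ge 0.5\,\|z\|^2 ,
```

so minimizing v + c^T x drives v down to 0.5 ‖t‖^2, while p + a^T x = b enforces 0.5 ‖z‖^2 + a^T x ≤ b.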
In this case the last two inequalities both take the form of a rotated quadratic cone, and the entire problem can be solved as a conic quadratic program (Section 2.3) using the interior-point algorithm.
If F is a non-singular matrix (e.g. a diagonal matrix), then x can be eliminated from the problem using the substitution x = F^(−1) t. In most cases the MOSEK optimization library will perform this reduction automatically during the presolve phase, before the actual optimization is performed.