The positive rank of a square matrix is defined in Theorem $3$ of “Expressing Combinatorial Optimization Problems by Linear Programs” by Mihalis Yannakakis as follows: given an $n\times n$ matrix $A$, the positive rank $\operatorname{rank}_{\mathbb{R}}^{+}(A)$ is the smallest $m$ such that $A=LR$ for a non-negative $n\times m$ matrix $L$ and a non-negative $m\times n$ matrix $R$.
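For concreteness, here is a small worked example (a standard one in the literature on non-negative rank, if I recall it correctly) showing that the two ranks can already differ for a $0/1$ matrix:
$$A=\begin{pmatrix}1&1&0&0\\1&0&1&0\\0&1&0&1\\0&0&1&1\end{pmatrix}.$$
Here $\operatorname{rank}_{\mathbb{R}}(A)=3$, since the rows satisfy $r_1-r_2-r_3+r_4=0$ while $r_1,r_2,r_3$ are independent. On the other hand, in any factorization $A=LR$ with non-negative $L,R$ of inner dimension $m$, the supports of the rank-one terms $L_{\cdot k}R_{k\cdot}$ are combinatorial rectangles whose union is exactly the support of $A$ (non-negativity rules out cancellation). The support of $A$ is an $8$-cycle as a bipartite graph, so the largest rectangle it contains has only two entries, and covering its eight entries forces $m\ge 4$; hence $\operatorname{rank}_{\mathbb{R}}^{+}(A)=4$.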
This concept is valuable in communication complexity, since it has been shown that if $\operatorname{rank}_{\mathbb{R}}^{+}(A)$ and $\operatorname{rank}_{\mathbb{R}}(A)$ were subexponentially related for every $0/1$ matrix $A$, then the log-rank conjecture would hold.
Is there an exponential separation between $\operatorname{rank}_{\mathbb{R}}^{+}(A)$ and $\operatorname{rank}_{\mathbb{R}}(A)$ for a general non-negative real matrix $A$ (as opposed to just a $0/1$ matrix), or is this problem also open?
I checked the references in Jukna’s book, but I was still unable to resolve the question above.