
Demonstrations for the theory of proveit.linear_algebra.tensors

In [1]:
import proveit
from proveit import defaults, UnsatisfiedPrerequisites
from proveit import Function, ExprRange, ExprTuple, IndexedVar
from proveit import a, f, i, u, v, w, x, y, z, alpha, beta, fi, gamma, delta
from proveit.logic import Forall, Equals, NotEquals, InSet, CartExp
from proveit.numbers import Natural, Real, Complex
from proveit.numbers import one, two, three, four, five, Interval
from proveit.linear_algebra import (
    VecSpaces, VecAdd, VecSum, VecZero, ScalarMult, TensorProd, TensorExp)
%begin demonstrations

Vector space default assumptions

In order to apply the tensor product theorems, the operands must be known to be vectors in vector spaces over a common field. For convenience, we may specify a default field, or we may specify a field when calling the TensorProd methods.

Let's set some defaults for convenience in our testing below.

In [2]:
R3 = CartExp(Real, three)
R3:
In [3]:
C3 = CartExp(Complex, three)
C3:
In [4]:
defaults.assumptions = [
    InSet(u, R3), InSet(v, R3), InSet(w, R3), 
    InSet(x, R3), InSet(y, R3), InSet(z, R3),
    NotEquals(x, VecZero(R3)), NotEquals(z, VecZero(R3)),
    InSet(a, Real), InSet(alpha, Complex), InSet(beta, Real),
    InSet(gamma, Real), InSet(delta, Real),
    Forall(i, InSet(fi, R3), domain=Natural)]
defaults.assumptions:
In [5]:
summand_assumption = defaults.assumptions[-1]
summand_assumption:
In [6]:
summand_assumption.instantiate(assumptions=[summand_assumption,
                                            InSet(i, Natural)])

Some example TensorProds for testing

In [7]:
tensor_prod_00 = TensorProd(ScalarMult(a, x), y)
tensor_prod_00:
In [8]:
tensor_prod_000 = TensorProd(x, ScalarMult(a, y))
tensor_prod_000:
In [9]:
tensor_prod_01 = TensorProd(ScalarMult(a, x), y, z)
tensor_prod_01:
In [10]:
tensor_prod_02 = TensorProd(x, ScalarMult(a, y), z)
tensor_prod_02:
In [11]:
tensor_prod_03 = TensorProd(x, y, ScalarMult(a, z))
tensor_prod_03:
In [12]:
tensor_prod_04 = TensorProd(u, TensorProd(v, ScalarMult(a, w), x, 
                                          ScalarMult(alpha, y)))
tensor_prod_04:
In [13]:
tensor_prod_05 = TensorProd(u, ScalarMult(a, v), VecAdd(w, x, y), z)
tensor_prod_05:
In [14]:
tensor_prod_06 = TensorProd(VecAdd(x, y), z)
tensor_prod_06:
In [15]:
tensor_prod_07 = TensorProd(x, VecAdd(y, z))
tensor_prod_07:
In [16]:
tensor_prod_08 = TensorProd(u, v, VecAdd(w, x, y), z)
tensor_prod_08:
In [17]:
tensor_prod_with_sum_01 = TensorProd(x, VecSum(i, Function(f, i), domain=Interval(one, three)), z)
tensor_prod_with_sum_01:
In [18]:
tensor_prod_with_sum_02 = TensorProd(VecSum(i, Function(f, i), domain=Interval(two, four)), z)
tensor_prod_with_sum_02:
In [19]:
tensor_prod_with_sum_03 = TensorProd(x, VecSum(i, Function(f, i), domain=Interval(two, four)))
tensor_prod_with_sum_03:
In [20]:
tensor_prod_with_sum_04 = TensorProd(u, v, w, x, VecSum(i, Function(f, i), domain=Interval(one, five)), z)
tensor_prod_with_sum_04:
In [21]:
tensor_prod_inside_sum_01 = VecSum(i, TensorProd(x, Function(f, i)), domain=Interval(two, four))
tensor_prod_inside_sum_01:
In [22]:
tensor_prod_inside_sum_02 = VecSum(i, TensorProd(y, Function(f, i)), domain=Interval(two, four))
tensor_prod_inside_sum_02:
In [23]:
tensor_prod_inside_sum_03 = VecSum(i, TensorProd(z, Function(f, i)), domain=Interval(two, four))
tensor_prod_inside_sum_03:
In [24]:
tensor_prod_inside_sum_04 = VecSum(i, TensorProd(Function(f, i), x), domain=Interval(two, four))
tensor_prod_inside_sum_04:
In [25]:
tensor_prod_inside_sum_05 = VecSum(i, TensorProd(x, Function(f, i), z), domain=Interval(two, four))
tensor_prod_inside_sum_05:
In [26]:
tensor_prod_inside_sum_06 = VecSum(i, TensorProd(x, y, Function(f, i), z), domain=Interval(two, four))
tensor_prod_inside_sum_06:

Upon implementing Issue #28, or something along those lines, the following should not be necessary:

In [27]:
i_domains = [tensor_prod_with_sum_01.operands[1].domain,
             tensor_prod_with_sum_03.operands[1].domain,
             tensor_prod_with_sum_04.operands[-2].domain]
judgments = []
for i_domain in i_domains:
    judgment = summand_assumption.instantiate(
        assumptions=[summand_assumption, InSet(i, i_domain)])
    judgments.append(judgment)
judgments

TensorProd simplification

In [28]:
help(TensorProd.shallow_simplification)
Help on function shallow_simplification in module proveit.linear_algebra.tensors.tensor_prod:

shallow_simplification(self, *, must_evaluate=False, **defaults_config)
    Returns a proven simplification equation for this TensorProd
    expression assuming the operands have been simplified.
    
    Currently deals only with:
    (1) simplifying a TensorProd(x) (i.e. a TensorProd with a
        single operand x) to x itself. For example,
        TensorProd(x) = x.
    (2) Ungrouping nested tensor products.
    (3) Factoring out scalars.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.
    
    'shallow_simplified' returns the right-hand side of 'shallow_simplification'.
    'shallow_simplify', called on an InnerExpr of a Judgment,
    substitutes the right-hand side of 'shallow_simplification' for
    the inner expression.

First test out unary TensorProd simplification.

In [29]:
TensorProd(x).simplification()

Our next test involves ungrouping and pulling out scalar factors, but it will not work without a proper default field specified, since it mixes real vectors with a complex scalar.

In [30]:
tensor_prod_04
In [31]:
VecSpaces.default_field = Complex
VecSpaces.default_field:
In [32]:
defaults.assumptions
In [33]:
tensor_prod_04.simplification()

The TensorProd.membership_object() method

In [34]:
x_y_z = TensorProd(x, y, z)
x_y_z:
In [35]:
R3_R3_R3 = TensorProd(R3, R3, R3)
R3_R3_R3:
In [36]:
# So, this is interesting … generates an error
# VecSpaces.known_field(R3_R3_R3)
In [37]:
# THIS HANGS FOREVER
# this suggests it would be good to have a TensorProd.containing_vec_space() method
# but this was working before implementing the TensorProdMembership class ?!
# field=Real
# temp_vec_space = VecSpaces.known_vec_space(x_y_z, field=field)
In [38]:
# VecSpaces.known_field(temp_vec_space)
WORKING HERE

A manual construction of the instantiation process for the tensor_prod_is_in_tensor_prod_space theorem from the proveit.linear_algebra.tensors package.
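For reference, the theorem being instantiated asserts, roughly speaking (this is an informal paraphrase based on the theorem's name and the substitutions used below, not its exact formal rendering):

\begin{align} \forall_{K}\, \forall_{i}\, \forall_{V_1, \ldots, V_i}\, \forall_{a_1 \in V_1, \ldots, a_i \in V_i}\; \left(a_1 \otimes \cdots \otimes a_i\right) \in \left(V_1 \otimes \cdots \otimes V_i\right) \end{align}

where each V_k is a vector space over the field K.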

In [39]:
from proveit.linear_algebra.tensors import tensor_prod_is_in_tensor_prod_space
tensor_prod_is_in_tensor_prod_space
In [40]:
temp_self = InSet(x_y_z, TensorProd(R3, R3, R3))
temp_self:
In [41]:
_a_sub = temp_self.element.operands
_a_sub:
In [42]:
_i_sub = _a_sub.num_elements()
_i_sub:
In [43]:
# This does NOT work, nor does get_field appear designed for this
_K_sub = VecSpaces.get_field(temp_self.domain)
_K_sub:
In [44]:
# THIS HANGS!

# we could try the following, which seems unpromising given that we already
# know what vec space we should be dealing with and should be able to derive
# the correct field instead of supplying it; notice that the result is not
# even satisfactory, returning Complexes instead of obvious Reals
# temp_self.element.deduce_in_vec_space(field=None)
In [45]:
# try this instead:
# does not work if we omit the automation=True
domain_is_vec_space = temp_self.domain.deduce_as_vec_space(automation=True)
domain_is_vec_space:
In [46]:
# followed by this:
_K_sub = VecSpaces.known_field(temp_self.domain)
_K_sub:
In [47]:
vec_spaces = temp_self.domain.operands
vec_spaces:
In [48]:
from proveit import a, i, K, V
tensor_prod_is_in_tensor_prod_space.instantiate(
        {a: _a_sub, i: _i_sub, K: _K_sub, V: vec_spaces})
In [49]:
InSet(x_y_z, TensorProd(R3, R3, R3)).conclude()
In [50]:
InSet(x_y_z, TensorProd(C3, C3, C3)).conclude()

The TensorProd.association() and TensorProd.disassociation() methods

In [51]:
help(TensorProd.association)
Help on function association in module proveit.linear_algebra.tensors.tensor_prod:

association(self, start_idx, length, *, field=None, **defaults_config)
    Given vector operands, or all CartExp operands, deduce that
    this expression is equal to a form in which operands in the
    range [start_idx, start_idx+length) are grouped together.
    For example, calling
    (a ⊗ b ⊗ ... ⊗ y ⊗ z).association(l, m-l+1)
    would return
    |- (a ⊗ b ⊗ ... ⊗ y ⊗ z) = 
        (a ⊗ b ... ⊗ (l ⊗ ... ⊗ m) ⊗ ... ⊗ y ⊗ z)
    Or calling (R3 ⊗ R3 ⊗ R3).associate(1, 2) would return
    |- (R3 ⊗ R3 ⊗ R3) = (R3 ⊗ (R3 ⊗ R3))
    
    For this to work in the vectors case, the vector operands must
    be known to be in vector spaces of a common field.  If the
    field is not specified, then VecSpaces.default_field is used.
    For this to work in the case of CartExp operands, all operands
    must be (recursively) CartExps and each must be known to be
    a vector space.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.
    
    'associated' returns the right-hand side of 'association'.
    'associate', called on an InnerExpr of a Judgment,
    substitutes the right-hand side of 'association' for
    the inner expression.

In [52]:
help(TensorProd.disassociation)
Help on function disassociation in module proveit.linear_algebra.tensors.tensor_prod:

disassociation(self, idx, *, field=None, **defaults_config)
    Given vector operands, or all CartExp operands, deduce that
    this expression is equal to a form in which operand at index
    idx is no longer grouped together.
    For example, calling
    (a ⊗ b ⊗ ... (l ⊗ ... ⊗ m) ... ⊗ y ⊗ z).association(l-1)
    would return
    |- (a ⊗ b ⊗ ... (l ⊗ ... ⊗ m) ... ⊗ y ⊗ z) = 
        (a ⊗ b ⊗ ... ⊗ l ⊗ ... ⊗ m ⊗ ... ⊗ y ⊗ z)
    Or calling (R3 ⊗ (R3 ⊗ R3)).disassociate(1) would return
    |- (R3 ⊗ (R3 ⊗ R3)) = (R3 ⊗ R3 ⊗ R3) 
    
    For this to work in the vectors case, the vector operands must
    be known to be in vector spaces of a common field.  If the
    field is not specified, then VecSpaces.default_field is used.
    For this to work in the case of CartExp operands, all operands
    must be (recursively) CartExps and each must be known to be
    a vector space.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.
    
    'disassociated' returns the right-hand side of 'disassociation'.
    'disassociate', called on an InnerExpr of a Judgment,
    substitutes the right-hand side of 'disassociation' for
    the inner expression.

In [53]:
# without the automation=True, gives error
# with automation=True, just HANGS
# tensor_prod_with_sum_04.association(1, 4, automation=True)
In [54]:
tensor_prod_04.disassociation(1, auto_simplify=False)

The TensorProd.scalar_factorization() method

In [55]:
help(TensorProd.scalar_factorization)
Help on function scalar_factorization in module proveit.linear_algebra.tensors.tensor_prod:

scalar_factorization(self, idx=None, *, field=None, **defaults_config)
    Prove the factorization of a scalar from one of the tensor 
    product operands and return the original tensor product equal 
    to the factored version.  If idx is provided, it will specify 
    the (0-based) index location of the ScalarMult operand with the
    multiplier to factor out.  If no idx is provided, the first 
    ScalarMult operand will be targeted.
    
    For example,
        TensorProd(a, ScalarMult(c, b), d).factorization(1)
    returns
        |- TensorProd(a, ScalarMult(c, b), d) =
           c TensorProd(a, b, d)
    
    As a prerequisite, the operands must be known to be vectors in
    vector spaces over a common field which contains the scalar
    multiplier being factored.  If the field is not specified,
    then VecSpaces.default_field is used.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.
    
    'scalar_factorized' returns the right-hand side of 'scalar_factorization'.
    'factor_scalar', called on an InnerExpr of a Judgment,
    substitutes the right-hand side of 'scalar_factorization' for
    the inner expression.

In [56]:
tensor_prod_00
In [57]:
tensor_prod_00.scalar_factorization(0, field=Real)

By default, the first ScalarMult operand will be the target. Also, we may use the default field, VecSpaces.default_field.
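Each factorization in the cells below is an instance of the bilinearity of the tensor product; informally:

\begin{align} (a\,x) \otimes y &= a\,(x \otimes y), & x \otimes (a\,y) \otimes z &= a\,(x \otimes y \otimes z). \end{align}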

In [58]:
VecSpaces.default_field = Real
VecSpaces.default_field:
In [59]:
tensor_prod_000.scalar_factorization()
In [60]:
tensor_prod_01.scalar_factorization()
In [61]:
tensor_prod_02.scalar_factorization(1)
In [62]:
tensor_prod_03.scalar_factorization()
In [63]:
tensor_prod_05.scalar_factorization()

The TensorProd.distribution() method

In [64]:
help(TensorProd.distribution)
Help on function distribution in module proveit.linear_algebra.tensors.tensor_prod:

distribution(self, idx, *, field=None, **defaults_config)
    Given a TensorProd operand at the (0-based) index location
    'idx' that is a vector sum or summation, prove the distribution
    over that TensorProd factor and return an equality to the 
    original TensorProd. For example, we could take the TensorProd
        tens_prod = TensorProd(a, b+c, d)
    and call tens_prod.distribution(1) to obtain:
        |- TensorProd(a, b+c, d) =
           TensorProd(a, b, d) + TensorProd(a, c, d)
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.
    
    'distributed' returns the right-hand side of 'distribution'.
    'distribute', called on an InnerExpr of a Judgment,
    substitutes the right-hand side of 'distribution' for
    the inner expression.

In [65]:
tensor_prod_05
In [66]:
tensor_prod_05.distribution(2)
In [67]:
tensor_prod_06
In [68]:
tensor_prod_06.distribution(0)
In [69]:
tensor_prod_07
In [70]:
tensor_prod_07.distribution(1)
In [71]:
tensor_prod_08
In [72]:
tensor_prod_08.distribution(2)
In [73]:
tensor_prod_with_sum_01
In [74]:
tensor_prod_with_sum_01.distribution(1)
In [75]:
temp_self = tensor_prod_with_sum_01
temp_self:
In [76]:
field=None
_V_sub = VecSpaces.known_vec_space(temp_self, field=field)
_V_sub:
In [77]:
# notice here that _V_sub is a TensorProd, not a VecSpace
type(_V_sub)
proveit.linear_algebra.tensors.tensor_prod.TensorProd
In [78]:
_K_sub = VecSpaces.known_field(_V_sub)
_K_sub:
In [79]:
idx = 1
sum_factor = temp_self.operands[idx]
sum_factor:
In [80]:
_a_sub = temp_self.operands[:idx]
_a_sub:
In [81]:
_c_sub = temp_self.operands[idx+1:]
_c_sub:
In [82]:
_i_sub = _a_sub.num_elements()
_i_sub:
In [83]:
_k_sub = _c_sub.num_elements()
_k_sub:
In [84]:
_b_sub = sum_factor.indices
_b_sub:
In [85]:
_j_sub = _b_sub.num_elements()
_j_sub:
In [86]:
from proveit import Lambda
_f_sub = Lambda(sum_factor.indices, sum_factor.summand)
_f_sub:
In [87]:
_Q_sub = Lambda(sum_factor.indices, sum_factor.condition)
_Q_sub:
In [88]:
from proveit import K, f, Q, i, j, k, V, a, b, c
from proveit.linear_algebra.tensors import tensor_prod_distribution_over_summation
impl = tensor_prod_distribution_over_summation.instantiate(
    {K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub, k:_k_sub,
     V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub},
    preserve_all=True)
impl:
In [89]:
tensor_prod_with_sum_02
In [90]:
tensor_prod_with_sum_02.distribution(0)
In [91]:
tensor_prod_with_sum_03
In [92]:
tensor_prod_with_sum_03.distribution(1)
In [93]:
tensor_prod_with_sum_04
In [94]:
tensor_prod_with_sum_04.distribution(4)
In [95]:
# recall one of our TensorProd objects without a sum or summation:
tensor_prod_02
In [96]:
# we should get a meaningful error message when trying to distribute across
# a factor that is not a sum or summation:
try:
    tensor_prod_02.distribution(1)
    assert False, "Expecting a ValueError; should not get this far!"
except ValueError as the_error:
    print("ValueError: {}".format(the_error))
ValueError: Don't know how to distribute tensor product over <class 'proveit.linear_algebra.scalar_multiplication.scalar_mult.ScalarMult'> factor

Working on details for a possible TensorProd.factoring() method below.

In [97]:
tensor_prod_inside_sum_01

That is actually equal to:
\begin{align} x \otimes f(2) + x \otimes f(3) + x \otimes f(4) &= x \otimes (f(2) + f(3) + f(4))\\ &= x \otimes \sum_{i=2}^{4} f(i) \end{align}

We want to be able to call tensor_prod_inside_sum_01.factorization(idx), where idx indicates the (0-based) index of the tensor product term to leave inside the summation. Clearly, any term(s) that are functions of the summation index itself cannot be pulled out, and we cannot change the order of the tensor product terms. In the simple example here, we would call tensor_prod_inside_sum_01.factorization(1) to obtain

\begin{align} \vdash \sum_{i=2}^{4} \left[x \otimes f(i)\right] &= x \otimes \sum_{i=2}^{4} f(i) \end{align}

In [98]:
from proveit.linear_algebra.tensors import tensor_prod_distribution_over_summation
tensor_prod_distribution_over_summation
In [99]:
self_test = tensor_prod_inside_sum_01
self_test:
In [100]:
idx = 1
idx: 1
In [101]:
field = None
_V_sub = VecSpaces.known_vec_space(self_test, field=field)
_V_sub:
In [102]:
_K_sub = VecSpaces.known_field(_V_sub)
_K_sub:
In [103]:
_a_sub = self_test.summand.operands[:idx]
_a_sub:
In [104]:
_c_sub = self_test.summand.operands[idx+1:]
_c_sub:
In [105]:
_i_sub = _a_sub.num_elements()
_i_sub:
In [106]:
_k_sub = _c_sub.num_elements()
_k_sub:
In [107]:
_b_sub = self_test.indices
_b_sub:
In [108]:
_j_sub = _b_sub.num_elements()
_j_sub:
In [109]:
_f_sub = Lambda(self_test.indices, self_test.summand.operands[idx])
_f_sub:
In [110]:
_Q_sub = Lambda(self_test.indices, self_test.condition)
_Q_sub:
In [111]:
tensor_prod_distribution_over_summation.instance_params
In [112]:
from proveit import K, f, Q, i, j, k, V, a, b, c
impl = tensor_prod_distribution_over_summation.instantiate(
                    {K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub, k:_k_sub, 
                     V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub}, preserve_all=True)
impl:
In [113]:
impl.derive_consequent().derive_reversed().with_wrapping_at()

The tensor_prod_factoring() method is technically a method on a VecSum object, but it is worth briefly illustrating its use here with tensors, since it essentially performs the inverse of the TensorProd.distribution() method:
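In general terms, the factoring identity involved is, informally sketched (with a and c standing for the tensor-product factors that do not involve the summation index):

\begin{align} \vdash\; \sum_{b} \left(a \otimes f(b) \otimes c\right) = a \otimes \Big(\sum_{b} f(b)\Big) \otimes c \end{align}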

In [114]:
# The numeric argument indicates the tensor product factor 
# to LEAVE inside the VecSum
tensor_prod_inside_sum_01.tensor_prod_factoring(1)
In [115]:
tensor_prod_inside_sum_03
In [116]:
defaults.assumptions
In [117]:
try:
    tensor_prod_inside_sum_03.tensor_prod_factoring(1)
except Exception as the_exception:
    print("Exception: {}".format(the_exception))

Testing (and problems) for factoring the case of tensor_prod_inside_sum_03, which is identical to tensor_prod_inside_sum_01 except that we now have z instead of x (and z has all the same properties as x) …

In [118]:
self_test = tensor_prod_inside_sum_03
self_test:
In [119]:
field = None
_V_sub = VecSpaces.known_vec_space(self_test, field=field)
_V_sub:
In [120]:
_K_sub = VecSpaces.known_field(_V_sub)
_K_sub:
In [121]:
_a_sub = self_test.summand.operands[:idx]
_a_sub:
In [122]:
_c_sub = self_test.summand.operands[idx+1:]
_c_sub:
In [123]:
_i_sub = _a_sub.num_elements()
_i_sub:
In [124]:
_k_sub = _c_sub.num_elements()
_k_sub:
In [125]:
_b_sub = self_test.indices
_b_sub:
In [126]:
_j_sub = _b_sub.num_elements()
_j_sub:
In [127]:
_f_sub = Lambda(self_test.indices, self_test.summand.operands[idx])
_f_sub:
In [128]:
_Q_sub = Lambda(self_test.indices, self_test.condition)
_Q_sub:
In [129]:
from proveit import K, f, Q, i, j, k, V, a, b, c
impl = tensor_prod_distribution_over_summation.instantiate(
                    {K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub, k:_k_sub, 
                     V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub}, preserve_all=True)
impl:
In [130]:
defaults.assumptions
In [131]:
InSet(z, R3).proven()
True
In [132]:
InSet(VecSum(i, Function(f, i), domain=Interval(two, four)), R3).proven()
True
In [133]:
TensorProd(z, VecSum(i, Function(f, i), domain=Interval(two, four))).deduce_in_vec_space(TensorProd(R3, R3), field=Real)
In [134]:
InSet(TensorProd(z, VecSum(i, Function(f, i), domain=Interval(two, four))), TensorProd(R3, R3)).prove()
In [135]:
impl.derive_consequent()
In [136]:
impl.derive_consequent().derive_reversed().with_wrapping_at()
In [137]:
tensor_prod_inside_sum_02.tensor_prod_factoring(1)
In [138]:
tensor_prod_inside_sum_03.tensor_prod_factoring(1)
In [139]:
tensor_prod_inside_sum_04.tensor_prod_factoring(0)
In [140]:
tensor_prod_inside_sum_05.tensor_prod_factoring(1)
In [141]:
tensor_prod_inside_sum_06.tensor_prod_factoring(2)
In [142]:
scalar_mult_example = ScalarMult(alpha, tensor_prod_inside_sum_06)
scalar_mult_example:
In [143]:
scalar_mult_example.inner_expr().operands[1].tensor_prod_factoring(2)
In [144]:
scalar_mult_example_02 = VecSum(i, TensorProd(ScalarMult(beta, x), ScalarMult(beta, y)), domain=Interval(two, four))
scalar_mult_example_02:
In [145]:
scalar_mult_example_03 = VecSum(i, ScalarMult(gamma, ScalarMult(beta, TensorProd(x, fi, y))), domain=Interval(two, four))
scalar_mult_example_03:
In [146]:
scalar_mult_example_04 = VecSum(i, ScalarMult(gamma, ScalarMult(i, TensorProd(x, y))), domain=Interval(two, four))
scalar_mult_example_04:
In [147]:
from proveit.numbers import Add
scalar_mult_example_05 = VecSum(
    i,
    ScalarMult(gamma, ScalarMult(i, ScalarMult(beta, ScalarMult(Add(i, one), TensorProd(x, y))))),
    domain=Interval(two, four))
scalar_mult_example_05:
In [148]:
defaults.assumptions
In [149]:
from proveit.logic import InClass
from proveit.linear_algebra import VecSpaces
In [150]:
InClass(x, VecSpaces(Real))
In [151]:
defaults.assumptions + (InClass(fi, VecSpaces(Real)), InClass(x, VecSpaces(Real)))
In [152]:
# scalar_factorization() pulls out just the single scalar it finds first,
# which is somewhat problematic: it produces a ScalarMult at the top level,
# making it difficult to then repeat the process
scalar_mult_example_02_factored_01 = scalar_mult_example_02.summand.scalar_factorization()
scalar_mult_example_02_factored_01:
In [153]:
scalar_mult_example_02.summand.simplification()
In [154]:
scalar_mult_example_02_factored_01
In [155]:
# we would then have to dig into the next level
# scalar_mult_example_02_factored_01.inner_expr().rhs.operands[1].factor_scalar()
In [156]:
scalar_mult_example_02_factored_01
In [157]:
# shallow simplify is working, pulling the scalars out front
# and eliminating nested ScalarMults
scalar_mult_example_02_factored_02 = scalar_mult_example_02_factored_01.inner_expr().rhs.operands[1].shallow_simplify()
scalar_mult_example_02_factored_02:
In [158]:
# double_scaling_reduce() seems to work now
# scalar_mult_example_02_factored_02.inner_expr().rhs.double_scaling_reduce()
In [159]:
scalar_mult_example_02.summand.shallow_simplification(
    assumptions=defaults.assumptions + (InClass(fi, VecSpaces(Real)), InClass(x, VecSpaces(Real))))
In [160]:
simplification_01 = scalar_mult_example_02.inner_expr().summand.shallow_simplification()
simplification_01:
In [161]:
scalar_mult_example_02.tensor_prod_factoring(idx=1)
In [162]:
test_result = scalar_mult_example_02.factors_extraction()
test_result:

Testing the instantiation of the distribution_over_vec_sum theorem.

In [163]:
from proveit.linear_algebra.scalar_multiplication import distribution_over_vec_sum
distribution_over_vec_sum
In [164]:
temp_self = scalar_mult_example_02
temp_self:
In [165]:
from proveit import TransRelUpdater
expr = temp_self
eq = TransRelUpdater(expr)
eq.relation
In [166]:
expr = eq.update(expr.inner_expr().summand.shallow_simplification())
eq.relation
In [167]:
expr
In [168]:
# this might be tricky — will known_vec_space return the same for every
# a_1, … a_i ?
_V_sub = VecSpaces.known_vec_space(expr.summand.scaled, field=None)
_V_sub:
In [169]:
_K_sub = VecSpaces.known_field(_V_sub)
_K_sub:
In [170]:
_b_sub = expr.indices
_b_sub:
In [171]:
_j_sub = _b_sub.num_elements()
_j_sub:
In [172]:
# from proveit import Lambda
_f_sub = Lambda(expr.indices, expr.summand.scaled)
_f_sub:
In [173]:
_Q_sub = Lambda(expr.indices, expr.condition)
_Q_sub:
In [174]:
_k_sub = expr.summand.scalar
_k_sub:
In [175]:
from proveit import V, K, a, b, f, j, k, Q
imp = distribution_over_vec_sum.instantiate(
    {V: _V_sub, K: _K_sub, b: _b_sub, j: _j_sub,
     f: _f_sub, Q: _Q_sub, k: _k_sub}, auto_simplify=False)
imp:
In [176]:
imp.derive_consequent().derive_reversed()
In [177]:
expr = eq.update(imp.derive_consequent().derive_reversed())
expr:
In [178]:
eq.relation
In [179]:
expr
In [180]:
expr.inner_expr().scaled.tensor_prod_factoring(1)
In [181]:
expr = eq.update(expr.inner_expr().scaled.tensor_prod_factoring(1))
expr:
In [182]:
eq.relation
In [183]:
scalar_mult_example_03
In [184]:
# scalar_mult_example_03.factors_extraction()
In [185]:
# consider this example now, noting the factor of index var 'i'
scalar_mult_example_04
In [186]:
# this seems to work ok when called literally like this:
scalar_mult_example_04.inner_expr().summand.shallow_simplification()
In [187]:
# but what if we try to use the TransRelUpdater?
expr = scalar_mult_example_04 # a VecSum with a ScalarMult summand
eq = TransRelUpdater(scalar_mult_example_04)
display(eq.relation)
expr = eq.update(expr.inner_expr().summand.shallow_simplification())
display(eq.relation)
In [188]:
# this will not yet work, because the i factor causes it to
# completely fail; want to check if the ScalarMult.scalar is itself a Mult,
# then check and pull out what we can from that
# scalar_mult_example_04.factors_extraction(assumptions=defaults.assumptions + (InSet(i, Real),))

(1) If the summand is a ScalarMult, we then check whether the ScalarMult.scalar involves the summation index.

(2) If not, factor it out! If it does, we might have to look more carefully …

WORKING HERE
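The two-step check above might be sketched as follows. This is a hypothetical sketch using a toy nested-tuple expression representation, not actual Prove-It API; the helper names are invented for illustration:

```python
# Hypothetical sketch of the proposed check: scalar factors of a
# ScalarMult summand that do not mention the summation index can be
# factored outside the VecSum; those that do must stay inside.

def free_vars(expr):
    """Collect variable names from a toy nested-tuple expression,
    e.g. ('Mult', ('var', 'gamma'), ('var', 'i'))."""
    if expr[0] == 'var':
        return {expr[1]}
    return set().union(*(free_vars(sub) for sub in expr[1:]))

def split_scalars(scalar_factors, index):
    """Split scalar factors into those free of the summation index
    (safe to pull outside the VecSum) and those that depend on it."""
    outside = [s for s in scalar_factors if index not in free_vars(s)]
    inside = [s for s in scalar_factors if index in free_vars(s)]
    return outside, inside

# For a summand like gamma * i * (x ⊗ y) with summation index i,
# gamma can be factored out but the factor of i must stay inside:
outside, inside = split_scalars([('var', 'gamma'), ('var', 'i')], 'i')
# outside == [('var', 'gamma')], inside == [('var', 'i')]
```

A real implementation would instead inspect ScalarMult.scalar (and, when it is a Mult, its individual factors) for occurrences of the VecSum index.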

In [189]:
scalar_mult_example
In [190]:
scalar_mult_example_01 = VecSum(i, ScalarMult(gamma, TensorProd(x, y, fi, z)), domain=Interval(two, four))
scalar_mult_example_01:

Here we try to manually re-create VecSum.deduce_in_vec_space() for the case in which the summand is a ScalarMult of a TensorProd.

We figure out the instantiation substitutions needed for the VecSum of a ScalarMult, along with some extra steps to deal with the vector space issues.

In [191]:
# this process works for a single vector left behind in the sum
# but not when we associate and then try
expr = scalar_mult_example_01
field = None
idx = 2 # just a single idx right now
idx_beg = 1
idx_end = 2
from proveit import TransRelUpdater
eq = TransRelUpdater(expr)
eq.relation
In [192]:
# associate the chosen elements to remain
if idx_beg != idx_end:
    expr = eq.update(expr.inner_expr().summand.scaled.association(
                idx_beg, idx_end - idx_beg + 1, auto_simplify=False))
    idx = idx_beg
display(expr)
display(idx)
1

The following cell seems important and may need to be implemented in some way in the VecSum.tensor_prod_factoring() method under certain circumstances (this still needs to be discussed with WW). The VecSpaces.known_vec_space() call here succeeds where the one in the next cell continued to be problematic.

In [193]:
# Now trying process without this and just the one above
####### put in some extra steps to help with the ########
####### vector field stuff?
# This is based on analogous stuff in VecSum.deduce_in_vec_space()
# with some modifications for ScalarMult summand
self = expr
with defaults.temporary() as tmp_defaults:
    tmp_defaults.assumptions = (defaults.assumptions + self.conditions.entries)
    vec_space = VecSpaces.known_vec_space(self.summand, field=field) #or on scaled?
    _V_sub = VecSpaces.known_vec_space(self.summand, field=field)
    display(vec_space)
In [194]:
# _V_sub = VecSpaces.known_vec_space(expr, field=field)
# display(expr)
# _V_sub = VecSpaces.known_vec_space(expr.summand)
# VecSpaces.known_vec_space(expr, field=Real)
In [195]:
_K_sub = VecSpaces.known_field(_V_sub)
_K_sub:
In [196]:
_b_sub = expr.indices
_b_sub:
In [197]:
_j_sub = _b_sub.num_elements()
_j_sub:
In [198]:
_Q_sub = Lambda(expr.indices, expr.condition)
_Q_sub:
In [199]:
_a_sub = expr.summand.scaled.operands[:idx]
_a_sub:
In [200]:
_c_sub = expr.summand.scaled.operands[idx+1:]
_c_sub:
In [201]:
_s_sub = Lambda(expr.indices, expr.summand.scalar)
_s_sub:
In [202]:
_f_sub = Lambda(expr.indices, expr.summand.scaled.operands[idx] )
_f_sub:
In [203]:
_i_sub = _a_sub.num_elements()
_i_sub:
In [204]:
_k_sub = _c_sub.num_elements()
_k_sub:
In [205]:
from proveit.linear_algebra.tensors import tensor_prod_distribution_over_summation_with_scalar_mult
tensor_prod_distribution_over_summation_with_scalar_mult
In [206]:
from proveit import K, f, Q, i, j, k, V, a, b, c, s
impl = tensor_prod_distribution_over_summation_with_scalar_mult.instantiate(
        {K:_K_sub, f:_f_sub, Q:_Q_sub, i:_i_sub, j:_j_sub,
                     k:_k_sub, V:_V_sub, a:_a_sub, b:_b_sub, c:_c_sub,
                     s: _s_sub}, auto_simplify=False)
impl:
In [207]:
impl.derive_consequent().derive_reversed()

Examples of tensor_prod_factoring()

In [208]:
scalar_mult_example_01.tensor_prod_factoring(idx=2)
In [209]:
scalar_mult_example_01.tensor_prod_factoring(idx_beg=1, idx_end=2)
In [210]:
scalar_mult_example_01.tensor_prod_factoring(idx_beg=1, idx_end=3)
In [211]:
scalar_mult_example_01.tensor_prod_factoring(idx_beg=0, idx_end=2)
In [212]:
scalar_mult_example_02.tensor_prod_factoring(idx=0)
In [213]:
scalar_mult_example_02.tensor_prod_factoring(idx=1)
In [214]:
# with no arguments, we factor as much as possible and reduce if possible
scalar_mult_example_02.tensor_prod_factoring()
In [215]:
scalar_mult_example_03
In [216]:
# the tensor_prod_factoring() will also work
# if the ScalarMults are nested (at least to some degree)
scalar_mult_example_03.tensor_prod_factoring(idx=1)
In [217]:
scalar_mult_example_03.inner_expr().tensor_prod_factoring(idx_beg=1, idx_end=2)
In [218]:
type(scalar_mult_example_03.summand.scaled.scaled.operands.num_elements())
proveit.numbers.numerals.numeral.Numeral
In [219]:
temp_result = scalar_mult_example_03.inner_expr().factors_extraction(
     assumptions = defaults.assumptions + scalar_mult_example_03.conditions.entries)
temp_result:

Testing TensorProd.association() for tensor products of sets

In [220]:
# Some example TensorProds of CartExps
example_vec_space_01, example_vec_space_02, example_vec_space_03, example_vec_space_04, example_vec_space_05 = (
    TensorProd(R3, R3, R3, R3), TensorProd(R3, TensorProd(R3, R3)), TensorProd(x, R3),
    TensorProd(TensorProd(C3, TensorProd(C3, C3)), C3, C3),
    TensorProd(TensorProd(R3, TensorProd(C3, y)), R3, C3))
example_vec_space_01:
example_vec_space_02:
example_vec_space_03:
example_vec_space_04:
example_vec_space_05:
In [221]:
example_vec_space_01_assoc = example_vec_space_01.association(1, 2)
example_vec_space_01_assoc:
In [222]:
# For this to work, we need to establish ahead of time that one of the
# operands is indeed a vector space
example_vec_space_04.operands[0].deduce_as_vec_space()
example_vec_space_04_assoc = example_vec_space_04.association(1, 2)
example_vec_space_04_assoc:
In [223]:
# This will not work, because we have a mixture of vectors and CartExps.
# The error message comes about because Prove-It is trying to treat R^3
# as a vector instead of a vector space
try:
    example_vec_space_05_assoc = example_vec_space_05.association(1, 2)
except Exception as the_exception:
    print("Exception: {}".format(the_exception))
Exception: Prerequisites not met while assuming (u in Real^{3}, v in Real^{3}, w in Real^{3}, x in Real^{3}, y in Real^{3}, z in Real^{3}, x != 0(Real^{3}), z != 0(Real^{3}), a in Real, alpha in Complex, beta in Real, gamma in Real, delta in Real, forall_{i in Natural} (f(i) in Real^{3})): Real^{3} is not known to be in a vector space over Real

Testing TensorProd.disassociation() for tensor products of sets

Notice that TensorProd.disassociation() will produce a complete disassociation if auto_simplify() is allowed to proceed unmodified, so here we first change the simplification directive 'ungroup' to False (and then reset it back to True after this section).
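As a plain-Python analogy (not Prove-It code), association can be pictured as grouping a contiguous slice of operands into a nested tuple, while the 'ungroup' auto-simplification flattens every nesting level away. The helper names here are hypothetical, chosen only for illustration:

```python
# Analogy only: tuples stand in for tensor-product operands.

def associate(operands, beg, length):
    """Group `length` operands starting at zero-based index `beg`."""
    return (operands[:beg]
            + (tuple(operands[beg:beg + length]),)
            + operands[beg + length:])

def ungroup(operands):
    """Recursively flatten nested groupings (a complete disassociation)."""
    flat = []
    for op in operands:
        if isinstance(op, tuple):
            flat.extend(ungroup(op))
        else:
            flat.append(op)
    return tuple(flat)

grouped = associate(('u', 'v', 'w', 'x'), 1, 2)
print(grouped)           # ('u', ('v', 'w'), 'x')
print(ungroup(grouped))  # ('u', 'v', 'w', 'x')
```

This mirrors why disabling 'ungroup' matters below: with flattening active, any partial disassociation immediately collapses to the fully flattened form.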

In [224]:
TensorProd.change_simplification_directives(ungroup=False)
In [225]:
example_vec_space_01_assoc.rhs.disassociation(1)
In [226]:
example_vec_space_04_assoc.rhs.disassociation(1)
In [227]:
example_vec_space_04_assoc.rhs.disassociation(0)
In [228]:
example_vec_space_04_assoc.rhs.inner_expr().operands[0].disassociation(1)
In [229]:
TensorProd(TensorProd(x, TensorProd(y, z)), TensorProd(x, y)).disassociation(1)
, ,  ⊢  
In [230]:
# Reset to allow default auto_simplify() results
# (and notice what then happens in the next cell)
TensorProd.change_simplification_directives(ungroup=True)
In [231]:
# Now the disassociation() produces a complete disassociation
# despite our intention to only disassociate a piece
TensorProd(TensorProd(x, TensorProd(y, z)), TensorProd(x, y)).disassociation(1)
, ,  ⊢  

Examples and Testing of VecSum.factors_extraction()

In [232]:
# here the collection of scalars includes an index-dependent factor
scalar_mult_example_04.factors_extraction()
, ,  ⊢  
In [233]:
defaults.assumptions
In [234]:
# Note: this has occasionally behaved non-deterministically; it is not clear
# whether the vector_spaces derivation is the source of the non-determinism.
# Here, all the scalar factors can be extracted:
display(scalar_mult_example_03)
scalar_mult_example_03.factors_extraction(assumptions = defaults.assumptions + scalar_mult_example_03.conditions.entries)
, , , ,  ⊢  
In [235]:
# In this case, one factor can be extracted, another cannot:
display(scalar_mult_example_04)
scalar_mult_example_04.factors_extraction()
, ,  ⊢  
In [236]:
# None of the factors are index-dependent
scalar_mult_example_02.factors_extraction()
, ,  ⊢  
In [237]:
# Just one vector of several is index-dependent
# while none of the scalars are index-dependent
display(scalar_mult_example_03)
scalar_mult_example_03.factors_extraction()
, , , ,  ⊢  
In [238]:
# What if the scalar in the ScalarMult is a single item?
display(scalar_mult_example_01)
scalar_mult_example_01.factors_extraction()
, , , ,  ⊢  
In [239]:
# What if some scalars are index-dependent
# while none of the vectors themselves are?
display(scalar_mult_example_04)
scalar_mult_example_04.factors_extraction()
, ,  ⊢  
In [240]:
# multiple factors, some of which can be extracted and some of which cannot
display(scalar_mult_example_05)
scalar_mult_example_05.factors_extraction(field=None)
, , ,  ⊢  
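The distinction driving these examples can be sketched numerically in plain Python (an analogy under assumed semantics, not Prove-It code): a scalar that does not depend on the summation index factors out of the sum, while an index-dependent scalar cannot be extracted.

```python
# Index-independent scalar c factors out: sum_i (c * f(i)) == c * sum_i f(i).
c = 2.5
f = [1.0, 4.0, 9.0]   # stand-in values for f(i)
g = [0.5, 1.5, 2.5]   # an index-dependent scalar g(i)

lhs = sum(c * fi for fi in f)
rhs = c * sum(f)
print(lhs == rhs)     # True: c can be pulled out of the sum

# An index-dependent scalar cannot be extracted the same way:
lhs_dep = sum(gi * fi for gi, fi in zip(g, f))
rhs_dep = sum(g) * sum(f)   # a naive "extraction" gives a different value
print(lhs_dep == rhs_dep)   # False
```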

Tues 12/21 – Wed 12/22: Testing the Add.factorization() method

In [241]:
from proveit.numbers import Add, Mult
add_01 = Add(Mult(alpha, beta), Mult(beta, gamma), Mult(alpha, Mult(gamma, beta)))
add_01:
In [242]:
# not surprisingly, the Add.factorization() method doesn't work
# if the desired factor is buried too deeply in one of the expressions
display(add_01)
try:
    add_01.factorization(beta)
except ValueError as the_exception:
    print("ValueError: {}".format(the_exception))
In [243]:
add_02 = Add(Add(Mult(alpha, beta), beta), Mult(beta, gamma))
add_02:
In [244]:
add_02.inner_expr().operands[0].factorization(beta)
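The failure mode above can be illustrated with a toy extractor in plain Python (a hypothetical sketch, not the Prove-It algorithm): if factoring only inspects the top-level factors of each term, a factor buried inside a nested product is missed, mirroring the ValueError raised for add_01.

```python
# Each term is a tuple of its top-level factors; nested products stay nested.
def factor_out(terms, factor):
    """Pull `factor` from every term, failing if it is not at the top level."""
    reduced = []
    for term in terms:
        if factor not in term:
            raise ValueError(
                f"{factor!r} is not a top-level factor of {term!r}")
        idx = term.index(factor)
        reduced.append(term[:idx] + term[idx + 1:])
    return reduced

# 'beta' is top-level in the first two terms, but nested in the third
# (the analogue of Mult(alpha, Mult(gamma, beta))), so extraction fails:
terms = [('alpha', 'beta'), ('beta', 'gamma'), ('alpha', ('gamma', 'beta'))]
try:
    factor_out(terms, 'beta')
except ValueError as e:
    print("ValueError:", e)
```

This is why the successful call below first descends to the inner Add via inner_expr() before factoring.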

Testing VecAdd.factorization() method

In [245]:
vec_add_example_01 = VecAdd(ScalarMult(Mult(alpha, beta), x), ScalarMult(Mult(gamma, alpha), y))
vec_add_example_01:
In [246]:
defaults.assumptions
In [247]:
ScalarMult(alpha, ScalarMult(beta, x)).shallow_simplification()
, ,  ⊢  
In [248]:
vec_add_example_01
In [249]:
vec_add_example_01.factorization(alpha, pull='left', field=Complex)
, , , ,  ⊢  

To Consider for Later Development: What if the operand(s) in a TensorProd consist of an ExprRange?

The TensorProd.remove_vec_on_both_sides_of_equals() and TensorProd.insert_vec_on_both_sides_of_equals() methods

In [250]:
help(TensorProd.remove_vec_on_both_sides_of_equals)
Help on function remove_vec_on_both_sides_of_equals in module proveit.linear_algebra.tensors.tensor_prod:

remove_vec_on_both_sides_of_equals(tensor_equality, idx, rhs_idx=None, *, field=None, **defaults_config)
    From an equality with tensor products of vectors on
    both sides, derive a similar equality but with the vector 
    operand removed at the particular given zero-based index (idx).
    A different index may be specified for the right side as the 
    left side by setting rhs_idx (i.e., if entries don't line up 
    due to differences of ExprRange entries), but the default will
    be to use the same.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.

In [251]:
help(TensorProd.insert_vec_on_both_sides_of_equals)
Help on function insert_vec_on_both_sides_of_equals in module proveit.linear_algebra.tensors.tensor_prod:

insert_vec_on_both_sides_of_equals(tensor_equality, idx, vec, rhs_idx=None, *, field=None, **defaults_config)
    From an equality with tensor products of vectors on
    both sides, derive a similar equality but with a vector 
    operand inserted at the particular given zero-based index (idx).
    A different index may be specified for the right side as the 
    left side by setting rhs_idx (i.e., if entries don't line up 
    due to differences of ExprRange entries), but the default will
    be to use the same.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.
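The intended index semantics of both methods can be sketched with plain tuples (an analogy under assumed semantics, not Prove-It code; the helper names are hypothetical). Note that actually removing a common vector factor is only a valid derivation under side conditions such as the remaining factors being nonzero, which Prove-It tracks but this sketch ignores:

```python
# Zero-based removal/insertion at matching positions on both sides.
def remove_vec_both_sides(lhs, rhs, idx, rhs_idx=None):
    if rhs_idx is None:
        rhs_idx = idx
    return (lhs[:idx] + lhs[idx + 1:], rhs[:rhs_idx] + rhs[rhs_idx + 1:])

def insert_vec_both_sides(lhs, rhs, idx, vec, rhs_idx=None):
    if rhs_idx is None:
        rhs_idx = idx
    return (lhs[:idx] + (vec,) + lhs[idx:], rhs[:rhs_idx] + (vec,) + rhs[rhs_idx:])

# From x (x) y (x) z = x (x) u (x) z, remove the common factor at index 0:
print(remove_vec_both_sides(('x', 'y', 'z'), ('x', 'u', 'z'), 0))
# (('y', 'z'), ('u', 'z'))
```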

In [252]:
tp_01, tp_02 = (TensorProd(x, y, z), TensorProd(x, u, z))
tp_01:
tp_02:
In [253]:
equality_01 = Equals(tp_01, tp_02)
equality_01:
In [254]:
defaults.assumptions += (equality_01,)

We can access TensorProd.remove_vec_on_both_sides_of_equals() and TensorProd.insert_vec_on_both_sides_of_equals() via Equals.remove_vec_on_both_sides and Equals.insert_vec_on_both_sides, respectively. These methods are generated on the fly by finding methods with the _on_both_sides_of_equals suffix that are associated with the Equals expression. The docstrings are set to be the same:
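The dispatch idea can be sketched in plain Python (a simplified, hypothetical sketch; Prove-It's actual mechanism differs in detail): an equality object exposes any registered function named with the `_of_equals` suffix as a dynamically bound method via `__getattr__`.

```python
# A registered '<name>_of_equals' function, with the equality as first arg:
def remove_vec_on_both_sides_of_equals(equality, idx):
    lhs, rhs = equality.sides
    return Equality((lhs[:idx] + lhs[idx + 1:], rhs[:idx] + rhs[idx + 1:]))

# Map the exposed method name to the underlying '..._of_equals' function:
REGISTRY = {'remove_vec_on_both_sides': remove_vec_on_both_sides_of_equals}

class Equality:
    def __init__(self, sides):
        self.sides = sides

    def __getattr__(self, name):
        # Generate the method on the fly, binding self as the first argument.
        if name in REGISTRY:
            func = REGISTRY[name]
            return lambda *args, **kwargs: func(self, *args, **kwargs)
        raise AttributeError(name)

eq = Equality((('x', 'y', 'z'), ('x', 'u', 'z')))
print(eq.remove_vec_on_both_sides(0).sides)  # (('y', 'z'), ('u', 'z'))
```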

In [255]:
help(equality_01.remove_vec_on_both_sides)
Help on function remove_vec_on_both_sides_of_equals in module proveit.linear_algebra.tensors.tensor_prod:

remove_vec_on_both_sides_of_equals(tensor_equality, idx, rhs_idx=None, *, field=None, **defaults_config)
    From an equality with tensor products of vectors on
    both sides, derive a similar equality but with the vector 
    operand removed at the particular given zero-based index (idx).
    A different index may be specified for the right side as the 
    left side by setting rhs_idx (i.e., if entries don't line up 
    due to differences of ExprRange entries), but the default will
    be to use the same.
    
    Keyword arguments are accepted for temporarily changing any
    of the attributes of proveit.defaults.

In [256]:
equality_02 = equality_01.remove_vec_on_both_sides(0)
equality_02: , , , , ,  ⊢  
In [257]:
equality_03 = equality_02.remove_vec_on_both_sides(1)
equality_03: , , , , , ,  ⊢  

The example above also demonstrates the auto-simplification of unary tensor products.
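That unary reduction can be pictured in plain Python (assumed semantics, not Prove-It code): once removals leave a "product" with a single operand, it simplifies to that operand itself.

```python
def simplify_unary(operands):
    """Reduce a unary 'product' to its sole operand; otherwise leave as-is."""
    return operands[0] if len(operands) == 1 else operands

print(simplify_unary(('y',)))      # 'y'
print(simplify_unary(('y', 'z')))  # ('y', 'z')
```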

Now let's insert vectors into the tensor products. In the special case where we don't yet have a tensor product on each side, we must call the TensorProd.insert_vec_on_both_sides_of_equals() method directly:

In [258]:
equality_04 = TensorProd.insert_vec_on_both_sides_of_equals(
    equality_03, 0, z)
equality_04: , , , , , ,  ⊢  

In the future, we could have CartExp and other vector space expression classes implement left_tensor_prod_both_sides_of_equals (and right_tensor_prod_both_sides_of_equals), so that the above would instead be implemented via equality.left_tensor_prod_both_sides(z).

Now that we have tensor products on both sides, we can call insert_vec_on_both_sides on the equality.

In [259]:
equality_05 = equality_04.insert_vec_on_both_sides(2, x)
equality_05: , , , , , ,  ⊢  

Operating on an ExprTuple whose single entry is an ExprRange representing a finite list of elements, the ExprTuple.range_expansion() method converts the ExprTuple to one with a finite listing of explicit entries.

For example, if $\text{expr_tuple_01} = (x_1,\ldots,x_3)$, calling expr_tuple_01.range_expansion() deduces and returns

$\vdash ((x_1,\ldots,x_3) = (x_1, x_2, x_3))$

The reason for including this here in the demonstrations notebook for the linear_algebra package is that the method can be used to convert something like

$x_1 \otimes \ldots \otimes x_3$

to the more explicit

$x_1 \otimes x_2 \otimes x_3$
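The idea behind range_expansion() can be sketched in plain Python (an analogy with a hypothetical helper, not Prove-It code): a symbolic range with known numeric endpoints expands into an explicit tuple of indexed entries.

```python
def range_expansion(var, start, end):
    """Expand 'var_start, ..., var_end' into an explicit tuple of entries."""
    return tuple(f"{var}_{i}" for i in range(start, end + 1))

print(range_expansion('x', 1, 3))  # ('x_1', 'x_2', 'x_3')
```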

In [260]:
# create an example ExprRange
example_expr_range = ExprRange(i, IndexedVar(x, i), one, three)
example_expr_range:
In [261]:
# use the example ExprRange as the arg(s) for a TensorProd:
example_tensor_prod_over_range = TensorProd(example_expr_range)
example_tensor_prod_over_range:
In [262]:
# find the ExprTuple equivalent to the (ExprTuple-wrapped) ExprRange:
expr_range_eq_expr_tuple = ExprTuple(example_expr_range).range_expansion()
expr_range_eq_expr_tuple:  ⊢  
In [263]:
# find the equivalent explicit tensor prod
example_tensor_prod_over_range.inner_expr().operands.substitution(
         expr_range_eq_expr_tuple)
In [264]:
# find the equivalent explicit tensor prod,
# and omit the wrapping inherited from the underlying theorem
example_tensor_prod_over_range.inner_expr().operands.substitution(
         expr_range_eq_expr_tuple).with_wrapping_at()
In [265]:
%end demonstrations