In terms of property testing, the first question calls for a good analysis of a natural tester for linearity over set systems, where the distance is defined via the infinity-norm (rather than as the relative Hamming distance). The second question refers to the computational problem known as local reconstruction (see, e.g., Testing and Reconstruction of Lipschitz Functions with Applications to Data Privacy).
I found the idea underlying the local reconstruction of a linear function over a set system very inspiring. Rather than defining the value of a set as the sum of the values of its elements and locally reconstructing this value accordingly, the value of a set is defined as a linear combination of the values of the vectors in the Hadamard basis, where each basis vector is represented as a linear combination of two sets (i.e., the set of elements assigned $1/\sqrt{n}$ and the set of elements assigned $-1/\sqrt{n}$). The point is that the value of such a basis vector is a $2/\sqrt{n}$ fraction of the values of these sets, and so the error (of $h$ on these vectors) is also cut by this factor. On the other hand, the Hadamard basis is orthonormal (just as the 'standard' basis of unit vectors), and so expressing the values of sets in its terms does not involve larger coefficients.
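To make this concrete, here is a minimal Python sketch of the foregoing reconstruction strategy (a rendering of the idea as described above, not of the actual algorithm in the paper). It assumes that the number of items $n$ is a power of two, and models query access to the approximately linear set function by a hypothetical Python function `f` that accepts a frozenset of items; the names `hadamard_matrix` and `reconstruct` are mine, introduced only for illustration.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester construction; n must be a power of two.
    The rows form an orthonormal basis of R^n with entries +-1/sqrt(n)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def reconstruct(f, n):
    """Given query access to an approximately linear set function f
    (defined on frozensets of items from range(n)), return a linear
    function h.  Makes 2n nonadaptive queries, deterministically."""
    H = hadamard_matrix(n)
    basis_vals = np.empty(n)
    for j in range(n):
        S_plus = frozenset(i for i in range(n) if H[j, i] > 0)
        S_minus = frozenset(i for i in range(n) if H[j, i] < 0)
        # the value of the basis vector v_j, inferred from the values of two sets
        basis_vals[j] = (f(S_plus) - f(S_minus)) / np.sqrt(n)

    def h(S):
        chi = np.zeros(n)
        chi[list(S)] = 1.0                  # indicator vector of the set S
        coeffs = H @ chi                    # coordinates of chi in the Hadamard basis
        return float(coeffs @ basis_vals)   # linear combination of basis values

    return h
```

Note that if `f` were exactly linear, then orthonormality of the Hadamard basis would make `h` coincide with `f` on all sets (by Parseval); the gain for approximately linear `f` is that each basis value is obtained with error that is only a $2/\sqrt{n}$ fraction of the error on sets.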
Note by Moni (19-Mar-2018): A similar result/idea was used before by Cynthia Dwork and Sergey Yekhanin (see Section 3 of New Efficient Attacks on Statistical Disclosure Control Mechanisms, Crypto 2008).
Set functions with convenient properties (such as submodularity) appear in application areas of current interest, such as algorithmic game theory, and allow for improved optimization algorithms. It is natural to ask (e.g., in the context of data-driven optimization) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions.
One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this question is positive (in a precise formal sense), as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea, which is based on expander graphs, and combine it with new techniques to obtain significantly stronger upper bounds. Furthermore, we provide improved lower bounds for this problem.
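In symbols (following the standard definition of modularity), the modularity equation asserts that
$$ f(S \cup T) + f(S \cap T) \;=\; f(S) + f(T) \quad\text{for all sets } S,T, $$
and a set function $f$ satisfies it $\epsilon$-approximately if
$$ \bigl| f(S \cup T) + f(S \cap T) - f(S) - f(T) \bigr| \;\le\; \epsilon \quad\text{for all sets } S,T. $$
The results cited above assert that every such $f$ is within $C \cdot \epsilon$, in the infinity-norm, of some function that satisfies the equation exactly (i.e., a modular function, which is linear when it assigns $0$ to the empty set), where $C$ is an absolute constant; the quantitative question is how small a constant $C$ one can obtain.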
Another question that we address is how to learn a linear function *h* that is close to an approximately linear function *f*, while querying the value of *f* on only a small number of sets. We present a deterministic algorithm that makes only linearly many (in the number of items) nonadaptive queries, thereby improving over a previous algorithm of Chierichetti, Das, Dasgupta and Kumar [2015], which is randomized and makes more than a quadratic number of queries. Our learning algorithm is based on a Hadamard transform.
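As an illustration of the query pattern only (a toy example built on the hypothetical `reconstruct` sketch given in the commentary above, not the paper's algorithm or analysis): all $2n$ queried sets are determined in advance by the Hadamard matrix (hence the queries are nonadaptive), and the learner uses no randomness; the noisy oracle below is made up for the example.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
w = rng.normal(size=n)             # hidden weights: the target linear function is S -> sum of w over S
eps = 0.01

def f(S):
    """A toy approximately linear oracle: the linear value plus a deterministic perturbation of size at most eps."""
    return sum(w[i] for i in S) + eps * np.sin(hash(S) % 100)

h = reconstruct(f, n)              # issues 2n = 16 nonadaptive queries
S = frozenset({0, 2, 5})
print(h(S), sum(w[i] for i in S))  # the two values agree up to a small error
```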