Null-Pointer Assignment Partition Software

First some background ...


The macro NULL expands to an implementation-defined null pointer constant. C11 §7.19 3

NULL is typically an integer constant 0, or ((void *)0), or the like. It may have a different implementation or type - it could be something else entirely, strange as that may be.

NULL might be type int. It might be type void * or something else. The type of NULL is not specified by the standard.
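As an illustration (not part of the original answer), here is a small C11 sketch that reports which of a few candidate types NULL happens to have on a given implementation; the list of cases is an assumption and is not exhaustive:

#include <stddef.h>
#include <stdio.h>

int main(void) {
    /* _Generic selects a branch based on the type NULL expands to here. */
    const char *kind = _Generic(NULL,
        int:     "int (a plain 0)",
        long:    "long (e.g. 0L)",
        void *:  "void * (e.g. ((void *)0))",
        default: "some other type");
    printf("On this implementation, NULL has type: %s\n", kind);
    return 0;
}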


When the null pointer constant is cast to any pointer type, it is a null pointer. An integer constant 0 cast to a pointer type is also a null pointer. A system could have many different (bit-wise) null pointer representations. They all compare equal to each other. They all compare unequal to a pointer to any valid object/function. Recall this comparison is done as pointers, not integers.

An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. C11 §6.3.2.3 3
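A minimal example of those guarantees, assuming nothing beyond the quoted text: null pointers obtained in different ways compare equal to each other and unequal to the address of a real object.

#include <assert.h>
#include <stddef.h>

int main(void) {
    int x = 42;
    int *a = 0;     /* integer constant 0 converted to a pointer type */
    int *b = NULL;  /* NULL converted to a pointer type */

    assert(a == b);   /* all null pointers compare equal to each other */
    assert(a == NULL);
    assert(b != &x);  /* and unequal to a pointer to any valid object */
    return 0;
}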


So after all that chapter and verse, how to distinguish NULL from 0?

If the macro NULL is defined as an int (a plain 0) - it is game over - there is no difference between NULL and 0.

If NULL is not an int, then code can use _Generic to differentiate NULL and 0. This does not help OP's "Any change made can only be made within the function itself." requirement, as that function accepts an argument.
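For illustration only, a sketch of that _Generic approach, assuming NULL expands to ((void *)0) rather than a plain int 0. Note the check has to happen at the call site (in a macro), before the argument is converted to the parameter type - which is exactly why it does not meet the OP's constraint:

#include <stddef.h>
#include <stdio.h>

static void handle(void *p, int came_from_pointer_type) {
    (void)p;
    puts(came_from_pointer_type ? "looks like NULL" : "looks like 0");
}

/* Wrap the call: _Generic sees the argument's type before any conversion. */
#define HANDLE(x) handle((void *)(x), _Generic((x), void *: 1, default: 0))

int main(void) {
    HANDLE(NULL); /* "looks like NULL" -- only if NULL is not a plain int 0 */
    HANDLE(0);    /* "looks like 0" */
    return 0;
}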

If NULL converts to a value whose bit pattern differs from that of a plain 0, then a simple memcmp() can differentiate.
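Again purely as a sketch: this inspects the object representation of a null void * and compares it byte-wise against all-zero bits with memcmp(). On most mainstream implementations the two are identical, so this only tells anything apart on an implementation where the null pointer's bit pattern is not all zeros:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    void *p = NULL;
    unsigned char zeros[sizeof p];
    memset(zeros, 0, sizeof zeros);

    if (memcmp(&p, zeros, sizeof p) == 0)
        puts("null pointer is all-zero bits here: memcmp cannot help");
    else
        puts("null pointer has a distinct bit pattern here");
    return 0;
}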

I suspect the whole reason for this exercise is to realize there is no portable method to distinguish NULL from 0.

answered Jun 18 '16 at 7:04

You should be able to use "show table extended ... partition" to see if you can get info on a partition, and not try to open any partition that is zero bytes. Like this:

scala> var sqlCmd="show table extended from mydb like 'mytable' partition (date_time_date='2017-01-01')"
sqlCmd: String = show table extended from mydb like 'mytable' partition (date_time_date='2017-01-01')

scala> var partitionsList=sqlContext.sql(sqlCmd).collectAsList
partitionsList: java.util.List[org.apache.spark.sql.Row] =
[[mydb,mytable,false,Partition Values: [date_time_date=2017-01-01]
Location: hdfs://mycluster/apps/hive/warehouse/mydb.db/mytable/date_time_date=2017-01-01
Serde Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
InputFormat: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
Storage Properties: [serialization.format=1]
Partition Parameters: {rawDataSize=441433136, numFiles=1, transient_lastDdlTime=1513597358, totalSize=4897483, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numRows=37825}
]]

Let me know if that works and you can avoid the 0-byters that way, or if you still get the null pointer.

James
