Thursday, February 5, 2015

Repost: Turning a Win7 PC into a WiFi hotspot so phones and laptops can share its connection

Host (Win7 PC) setup is as follows:
1. Run the Command Prompt as administrator (shortcut: Win+R → type cmd → Enter).
2. Enable and configure the virtual WiFi adapter by running:
netsh wlan set hostednetwork mode=allow ssid=wuminPC key=wuminWiFi
This command has three parameters:
mode: whether the virtual WiFi adapter is enabled; change it to disallow to disable the adapter.
ssid: the wireless network name, preferably in English (wuminPC in this example).
key: the wireless network password, eight or more characters (wuminWiFi in this example).
The three parameters can also be used individually; for example, running the command with only mode=disallow disables the virtual WiFi adapter.
After the adapter is enabled, a new wireless connection using the "Microsoft Virtual WiFi Miniport Adapter" appears under Network Connections; for convenience, rename it "Virtual WiFi". If it does not appear, updating the wireless card driver should fix it.
3. Set up Internet Connection Sharing: in the Network Connections window, right-click the connection that is already connected to the Internet, choose Properties → Sharing, check "Allow other network users to connect..." and select the Virtual WiFi connection. After you click OK, the word "Shared" appears next to the sharing adapter's icon, indicating that the broadband connection is now shared with the Virtual WiFi adapter.
4. Start the wireless network by running in the Command Prompt:
netsh wlan start hostednetwork
Note: netsh wlan show hostednetwork displays information about the hosted network; changing start to stop shuts the network down, and after a reboot the network is brought back up by running the start command again.
At this point the red X on the Virtual WiFi connection disappears; the WiFi access point is up and host setup is complete. Laptops, WiFi-capable phones, and other clients can search for the wireless network wuminPC, enter the password wuminWiFi, and share the connection.
The virtual wireless AP broadcasts an 802.11g WLAN with 54 Mbps of bandwidth.
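For convenience, the commands above can be collected into a small Windows batch file (a sketch using the example SSID and key from this post; it must be run from an administrator Command Prompt, and the Internet Connection Sharing step still has to be done by hand):

```
@echo off
rem Create and start the hosted network (example ssid/key from the text above)
netsh wlan set hostednetwork mode=allow ssid=wuminPC key=wuminWiFi
netsh wlan start hostednetwork
rem Check status; run "netsh wlan stop hostednetwork" to shut the network down
netsh wlan show hostednetwork
```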

Installing the ia32 libraries on 64-bit Ubuntu 13.10

Install the Synaptic package manager: open a terminal and run: sudo apt-get install synaptic
Open Synaptic and choose "Settings > Repositories".
Choose "Other Software > Add".
In the APT line, enter "deb http://archive.ubuntu.com/ubuntu/ raring main restricted universe multiverse".
Click OK and close Synaptic.
In the terminal, run "sudo apt-get update".
Then run the following in the terminal:
sudo apt-get install glib-networking-common:i386
sudo apt-get install glib-networking:i386
sudo apt-get install gstreamer0.10-plugins-good:i386
sudo apt-get install ia32-libs-multiarch:i386
sudo apt-get install ia32-libs
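Once the raring repository has been added and the package index updated, the installs above can equivalently be done with a single command (a sketch; whether each :i386 package is available depends on your mirror):

```
sudo apt-get update
sudo apt-get install glib-networking-common:i386 glib-networking:i386 \
    gstreamer0.10-plugins-good:i386 ia32-libs-multiarch:i386 ia32-libs
```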

Syntax highlighting for GrADS scripts in gedit

Following the gedit syntax-highlighting setup for NCL scripts, I adapted a syntax-highlighting language file for GrADS scripts in gedit.
It can be downloaded from the following link: http://pan.baidu.com/share/link? ... 8&uk=2634521274
Usage is the same as described at http://www.ncl.ucar.edu/Applications/editor.shtml.
The plugin currently highlights only keywords and comments; if anyone familiar with GtkSourceView can add further improvements, please do, and share the result.

The Self-Describing Files (SDF) interface in GrADS version 1.7 beta

GrADS is a data visualization and analysis package from COLA, the Center for Ocean-Land-Atmosphere Studies. My work on GrADS has been to add the ability to read netCDF and HDF-SDS files, which I call SDFs (Self-Describing Files). The primary definition of what a netCDF or HDF-SDS file needs in order to be compatible with the SDF interface in GrADS version 1.7 beta is the set of netCDF conventions adopted by participants in the Cooperative Ocean/Atmosphere Research Data Service (COARDS). In addition, the UNIDATA Program Center's udunits package, version 1.10 or later, is needed.



For a description of the SDF interface in GrADS version 1.6, go here.
In the following discussion of an SDF variable that corresponds to "time" (in GrADS terms, a "T" variable), the variable name "time" is actually used, but this variable is not required to have that name. It is determined based on the units attribute value exclusively, as required by the COARDS conventions.
The COARDS conventions imply that the time:units attribute value should contain an origin, and GrADS' SDF interface requires it. Put in terms of function calls in the udunits package, the time:units attribute value must encounter success with the calls utScan, utIsTime, and utHasOrigin (these are the C names; in FORTRAN, the function names are utdec, uttime, and utorigin).
Additionally, a time:units attribute specifying "months" is not supported, because that unit has no fixed length. However, data that have a monthly time step are easily described using a "days since origin_date" form of the units attribute value. The time step is checked, and if it is greater than 27 days (whether the units attribute specifies days or hours), monthly frequency is assumed.
Multiple horizontal grids (a.k.a. staggered grids) are supported by the COARDS conventions, but not by GrADS' SDF interface at this time. The new XDF interface can be used to partially address this point. For information on GrADS' XDF interface, an alternative for netCDF/HDF-SDS files, follow this link.
Climatology files are difficult to use at this time. Work is under way to correct this in future releases.
NetCDF defines a coordinate variable as a one-dimensional variable with the same name as a dimension. What GrADS describes as an X coordinate variable is recognized by a units attribute value of:
        "degrees_east", "degree_east", "degrees_E", or "degree_E"
Y coordinate variables are recognized if they have a units attribute value of:
        "degrees_north", "degree_north", "degrees_N", or "degree_N"
Z coordinate variables are recognized if they have a units attribute value that is a unit of length that can be converted by the udunits package to "feet", or a unit of pressure that can be converted to "pascals", or a unit of temperature that can be converted to "degrees Kelvin", or if they have one of the following explicit values (case is not significant):
        "mb", "sigma_level", "level", "layer", "layers",
        "hybrid_sigma_level", "degreesk", or "degrees_k"
Any non-coordinate variable whose dimensions consist of the X and Y dimensions, and optionally the T and/or Z dimensions, discovered as described above, is considered displayable by the SDF interface and may be used in expressions.
The preceding describes, to some extent, what an SDF needs in order to be accessible via the sdfopen command. However, files which do not have conformant metadata can still be read via the xdfopen command.
The syntax for the sdfopen command consists of one required argument (the path to the SDF file), and two optional arguments:
sdfopen SDFpath [template #time_steps]
The optional arguments are for using a time series of files as a single entity. The #time_steps value is the sum of the counts in all the files to be examined, not the count in any one file. The different files are automatically accessed as the "time" or "t" settings in GrADS are altered. For example, if one had daily uwnd data in uwnd.1989.nc and uwnd.1990.nc, one could enter:
sdfopen /Data/uwnd.1989.nc uwnd.%y4.nc 730
Thereafter in the session, times from either data file can be accessed. The %y4 in the template indicates a four-digit year that can vary in the filenames that can be accessed.

Repost: Conventions for the standardization of NetCDF files

Sponsored by the "Cooperative Ocean/Atmosphere Research Data Service", a NOAA/university cooperative for the sharing and distribution of global atmospheric and oceanographic research data sets
10 Feb 1995 - version 1.0
13 Mar 1995 - minor editorial repairs
01 May 1995 - added links, convention note
University participants:
NOAA participants:
This standard is a set of conventions adopted in order to promote the interchange and sharing of files created with the netCDF Application Programmer Interface (API).  This standard is based upon version 2.3 of netCDF.  Documentation of the netCDF API may be found in the "NetCDF Users' Guide, Version 2.3, April 1993", available from URL http://www.unidata.ucar.edu/packages/netcdf/ or via anonymous ftp at ftp.unidata.ucar.edu.  All conventions named in that document will be adhered to in this standard unless noted to the contrary.
This standard also refers to version 1.7.1 of the udunits standard supported by Unidata.  The udunits package is available via anonymous ftp at ftp.unidata.ucar.edu.  Included in the udunits package is a file, udunits.dat, which lists collections of unit names.  The names given therein and their plural forms will be regarded as acceptable unit names for this standard with the following additions and deletions:
  • "degrees" - deleted
  • "level", "layer", "sigma_level" - added
The unit "degrees" creates ambiguities when attempting to differentiate longitude and latitude coordinate variables; files must use "degrees_east" for units of longitude and "degrees_north" for units of latitude or the alternative "spellings" of those names listed in the sections on longitude and latitude coordinates below.  The dimensionless units "level", "layer", and "sigma_level" are sometimes needed when representing numerical model outputs.
The udunits package also supports linear transformation of all units through the syntax "scale_factor unit_name@offset", for example, "0.0005 degC@40000".  This syntax, however, is not supported by this standard.
These conventions have been registered with Unidata as the COARDS conventions and are available at ftp://ftp.unidata.ucar.edu/pub/netcdf/Conventions/COARDS

Conventions:
File Name:
NetCDF files should have the file name extension ".nc".
Coordinate Variables:
1-dimensional netCDF variables whose dimension names are identical to their variable names are regarded as "coordinate variables" (axes of the underlying grid structure of other variables defined on this dimension).
Global attributes:
Although not mandatory, the attribute "history" is recommended to record the evolution of the data contained within a netCDF file. Applications which process netCDF data can append their information to the history attribute.
The optional attribute "Conventions" is recommended to reference the COARDS conventions, registered with Unidata, and available via ftp at
   directory:      pub/netcdf/Conventions/COARDS
   host:           ftp.unidata.ucar.edu
 The attribute has this value:
  :Conventions = "COARDS";
  // Cooperative Ocean/Atmosphere Research Data Service
  • long_name - a long descriptive name (title). This could be used for labelling plots, for example.  If a variable has no long_name attribute assigned, the variable name will be used as a default.
  • scale_factor - If present for a variable, the data are to be multiplied by this factor after the data are read by the application that accesses the data. (see further discussion under the add_offset attribute)
  • add_offset - If present for a variable, this number is to be added to the data after it is read by the application that accesses the data. If both scale_factor and add_offset attributes are present, the data are first scaled before the offset is added. The attributes scale_factor and add_offset can be used together to provide simple data compression to store low-resolution floating-point data as small integers in a netCDF file. When scaled data are written, the application should first subtract the offset and then divide by the scale factor.
    The NOAA cooperative standard is more restrictive than the netCDF Users Guide with respect to the use of the scale_factor and add_offset attributes; ambiguities and precision problems related to data type conversions are resolved by these restrictions.  If the scale_factor and add_offset attributes are of the same data type as the associated variable no restrictions apply; the unpacked data is assumed to be of the same data type as the packed data.  However, if the scale_factor and add_offset attributes are of a different data type than the associated variable (containing the packed data) then in files adhering to this standard the associated variable may only be of type byte, short, or long.  The attributes scale_factor and add_offset (which must match in data type) must be of type float or double.  The data type of the attributes should match the intended type of the unpacked data.  (It is not advised to unpack a long into a float as there is a potential precision loss.)
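The scale_factor/add_offset arithmetic above can be sketched in a few lines of plain Python (no netCDF library involved; the scale, offset, and data values here are made up for illustration):

```python
# COARDS-style packing: store low-resolution floats as small integers.
scale_factor = 0.01     # hypothetical attribute values
add_offset = 273.15

def pack(value):
    # Writer: first subtract the offset, then divide by the scale factor.
    return round((value - add_offset) / scale_factor)

def unpack(packed):
    # Reader: first multiply by the scale factor, then add the offset.
    return packed * scale_factor + add_offset

stored = pack(293.67)          # small integer, fits in a netCDF short
recovered = unpack(stored)     # approximately the original 293.67
```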
  • _FillValue - If a scalar attribute with this name is defined for a variable and is of the same type as the variable, it will be subsequently used as the fill value for that variable. The purpose of this attribute is to save the applications programmer the work of prefilling the data and also to eliminate the duplicate writes that result from netCDF filling in missing data with its default fill value, only to be immediately overwritten by the programmer's preferred value. This value is considered to be a special value that indicates missing data, and is returned when reading values that were not written. The missing value should be outside the range specified by valid_range (if used) for a variable. It is not necessary to define your own _FillValue attribute for a variable if the default fill value for the type of the variable is adequate.  
Units attribute:
A character array that specifies the units used for the variable's data.  Where possible the units attribute should be formatted as per the recommendations in the Unidata udunits package.
Other attributes:
A file will normally contain many attributes that are not standardized in this profile.  Those attributes do not represent a violation of this standard in any way.  Application programs should ignore attributes that they do not recognize.
Variable names:
Variable names should begin with a letter and be composed of letters, digits, and underscores.  It is recommended that variable names be treated as case-insensitive, implying that the same case-insensitive name should not be used for multiple variables within a single file.
Rectilinear coordinate systems, only:
The space/time locations of points within the netCDF variables should be the simple ordered tuples formed by associating values from their coordinate axes.  Thus, for example, curvilinear coordinate systems in which the coordinate locations must be inferred from other non-coordinate variables or from an equation are not standardized by this netCDF profile.
Number of dimensions:
All netCDF variables will be defined on either one, two, three, or four dimensions (the nature of the data will dictate the natural encoding).  Where it makes sense, single point locations should be encoded as coordinate variables; for example, the latitude and longitude positions of a vertical profile are natural candidates for single point latitude and longitude coordinate variables.
 If it is necessary to create a netCDF file with more than 4 dimensions it is recommended that the additional dimension(s) be added "to the left" of the space and time dimensions as represented in CDL.  For example
 float my_variable(param_value,time,height,lat,lon);  
would be the recommended representation of a fifth, parameter value, coordinate.
Coordinate variable names:
The names of coordinate variables are not standardized by these conventions (since data sets may in general contain multiple coordinate variables of the same orientation).  Coordinate variable names should follow the same general naming rules (above) as other netCDF variables.
Order of dimensions:
If any or all of the dimensions of a variable have the interpretations of "date or time" (a.k.a. "T"), "height or depth" (a.k.a. "Z"), "latitude" (a.k.a. "Y"), or "longitude" (a.k.a. "X") then those dimensions should appear in the relative order T, then Z, then Y, then X in the CDL definition corresponding to the file.
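For example, a four-dimensional temperature variable whose dimensions carry all four interpretations would be declared in CDL with the dimensions ordered T, Z, Y, X (the names and sizes below are hypothetical):

```
dimensions:
        time = UNLIMITED ;
        level = 17 ;
        lat = 73 ;
        lon = 144 ;
variables:
        float temp(time, level, lat, lon) ;  // T, Z, Y, X
```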
Data type:
The data type of coordinate and non-coordinate variables is unrestricted (byte, short, long, float, and double are all acceptable data types). Although not forbidden by this standard the data type "char", which is functionally identical to "byte", is not recommended as netCDF has reserved the option to modify its behavior in future versions.
Coordinate value ordering:
The coordinate values of a coordinate variable must be either monotonically increasing or monotonically decreasing.  However, the coordinate values need not be evenly spaced.  Missing values are not permitted in coordinate variables.
Coordinate Variable Attributes:
If a coordinate variable contains longitude, latitude, depth, elevation, date, or time values then the units attribute is mandatory; it is used to determine the orientation of the coordinate variable.  The long_name attribute is optional but may be used to enhance clarity and the self-describing nature of the netCDF file.  Since coordinate variables may not contain missing values the attributes _FillValue and missing_value may not be used with coordinate variables.
Time or date dimension:
Coordinate variables representing time must always explicitly include the units attribute;  there is no default value. A time coordinate variable will be identifiable by its units, alone. The units attribute will be of character type with the string formatted as per the recommendations in the Unidata udunits package version 1.7.1.  The following excerpt from the udunits documentation explains the time unit encoding by example:
 The specification:
      seconds since 1992-10-8 15:15:42.5 -6:00
 indicates seconds since October 8th, 1992 at 3 hours, 15 minutes and 42.5  seconds in the afternoon in the time zone which is six hours to the west of  Coordinated Universal Time (i.e. Mountain Daylight Time).  The time zone  specification can also be written without a colon using one or two-digits  (indicating hours) or three or four digits (indicating hours and minutes).  
The acceptable units for time are listed in the file udunits.dat.  The most commonly used of these strings (and their abbreviations) include day (d), hour (hr, h), minute (min), second (sec, s), and year (yr). Plural forms are also acceptable.  The date string may include date alone; date and time; or date, time, and time zone.
It is recommended that the unit "year" not be used as a unit of time. Year is an ambiguous unit as years are of varying length.  Udunits defines a year as exactly 365 days.
A time coordinate variable is identifiable from its units string, alone.  The udunits routines utScan and utIsTime can be used to make this determination.  (Note that at the time of this writing the author of this draft profile had not tested these routines personally.)
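The arithmetic behind such a units string can be sketched with Python's standard datetime module (an illustration of the encoding only; real applications should use the udunits routines):

```python
from datetime import datetime, timedelta, timezone

# Origin encoded by "seconds since 1992-10-8 15:15:42.5 -6:00"
origin = datetime(1992, 10, 8, 15, 15, 42, 500000,
                  tzinfo=timezone(timedelta(hours=-6)))

# A time coordinate value of 3600.0 then denotes one hour after the origin.
t = origin + timedelta(seconds=3600.0)
print(t.isoformat())  # 1992-10-08T16:15:42.500000-06:00
```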
Climatological time:
Coordinate variables representing climatological time (an axis of 12 months, 4 seasons, etc. that is located in no particular year) should be encoded like other time axes but with the added restriction that they be encoded to begin in the year 0000.  (Note - at the time of this writing this encoding has not been tested with the udunits package.)
Vertical (height or depth) dimension:
Coordinate variables representing height or depth must always explicitly include the units attribute; there is no default value for the units attribute. The units attribute will be of character type.
The acceptable units for vertical (depth or height) coordinate variables are
  • units of pressure as listed in the file udunits.dat.  For vertical axes the most commonly used of these include bar, millibar, decibar, and atmosphere (atm).
  • units of length as listed in the file udunits.dat.  For vertical axes the most commonly used of these include meter (metre, m), centimeter (cm), decimeter (dm), and feet (ft).
  • the (dimensionless) units "level", "layer", or "sigma_level"
  • other units listed in the file udunits.dat that may under certain circumstances reference vertical position such as units of density or temperature.
Plural forms are also acceptable.
The direction of positive, whether up or down, cannot in all cases be inferred from the units.  The direction of positive is useful for applications displaying the data.  For this reason the new attribute positive is defined in this standard.  The inclusion of the positive attribute is required by this standard if the vertical axis units are not a valid unit of pressure (a determination which can be made using the udunits routine, utScan) -- otherwise its inclusion is optional.  The positive attribute may have the value "up" or "down" (case insensitive).
For example, if an oceanographic netCDF file encodes the depth of the surface as 0 and the depth of 1000 meters as 1000 then the axis would use attributes as follows:
axis_name:units="meters";
axis_name:positive="down";
If, on the other hand, the depth of 1000 meters were represented as -1000 then the value of the positive attribute would have been "up".  If the units attribute value is a valid pressure unit the default value of the positive attribute is "down".
A vertical coordinate variable will be identifiable by
  • units of pressure; or
  • the presence of the positive attribute with a value of "up" or "down" (case insensitive).
Latitude dimension:
Coordinate variables representing latitudes must always explicitly include the units attribute; there is no default value for the units attribute.  The units attribute will be of character type with the string formatted as per the recommendations in the Unidata udunits package.
The recommended unit of latitude is "degrees_north".  Also acceptable are "degree_north", "degree_N", and "degrees_N".
A latitude coordinate variable is identifiable from its units string, alone.  The udunits routine utScan can be used to make this determination.  (Note that at the time of this writing the author of this draft profile had not tested this routine personally.)
Longitude dimension:
Coordinate variables representing longitudes must always explicitly include the units attribute; there is no default value for the units attribute.  The units attribute will be of character type with the string formatted as per the recommendations in the Unidata udunits package.
The recommended unit of longitude is "degrees_east" (eastward positive).  Also acceptable are "degree_east", "degree_E", and "degrees_E".  The unit "degrees_west" (westward positive) is not recommended because it implies a negative conversion factor from degrees_east.
Longitudes may be represented modulo 360.  Thus, for example, -180, 180, and 540 are all valid representations of the International Dateline and 0 and 360 are both valid representations of the Prime Meridian. Note, however, that the sequence of numerical longitude values stored in the netCDF file must be monotonic in a non-modulo sense.
A longitude coordinate variable is identifiable from its units string, alone.  The udunits routine utScan can be used to make this determination.  (Note that at the time of this writing the author of this draft profile had not tested this routine personally.)

What type of calendar is used in the CM2.x experiments?

A 365 day calendar is used: February always has 28 days and a year always has 365 days.
(NOTE: This answer applies to all CM2.x deccen experiments.)
   
Coupled climate model experiments conducted at GFDL to explore deccen climate issues have traditionally used a 365 day calendar without leap years. Neglecting leap days (i.e., 29 February never occurs) is acceptable for these experiments because no attempt is being made to replicate the weather conditions of any particular day. Also, by having a calendar in which a year is always 365 days in length, many analyses of the long model-produced time series become more straightforward, because the seasonal cycle is identical each and every year (e.g., the location of the sun in the sky at 12Z on 7 October, or any other date and time, is exactly the same every year).
   
That the CM2.x models use a 365 day calendar is indicated in the netCDF output file attributes as…
   
         time:calendar = "noleap" ; 
   
    The NetCDF Climate and Forecast (CF) Metadata Conventions provide for not one, but two ways to specify a 365 day repeating calendar. The two are "noleap" or "365_day". The CM2.x model output uses the "noleap" attribute setting. (A "common_year" also is considered to be 365 days.)
   
So, what does this mean for the end user? It depends upon what analysis package is being used.

Analysis programs that have adopted the CF calendar conventions should recognize that CM2.x is using a noleap calendar and automatically adapt, without any user actions required. [For example, the Ferret analysis program correctly handles the CM2.x noleap calendar, requiring no user actions in order for the calendar to be properly displayed.]

Some other programs are able to accommodate a noleap calendar, but only if the user intervenes. Such programs do not recognize the noleap netCDF attributes, and instead require the user to somehow specify the kind of calendar being used. [For example, the GrADS analysis program can adapt to the noleap calendar if one creates a partial descriptor file and uses the "xdfopen" command. For more information about dealing with the noleap calendar in GrADS, see the Tips on using NetCDF files with GrADS writeup at the bottom of this page, provided courtesy of Jennifer Adams of COLA.]

However, programs that neither recognize the noleap attribute, nor allow one to override the default calendar, will have problems displaying the proper date if the program assumes anything other than a 365 day calendar.
  
How bad a mistake is made if one assumes the CM2.x models use a Gregorian or standard calendar (i.e., one with leap days) instead of a noleap calendar? It depends on how far in time the point of interest is relative to the reference time the program uses in its calendar calculations. For example, consider the case where the reference time is equivalent to the specification
          time:units = "days since 1-01-01 00:00:00" ;
In other words, the calendar calculations use 1 January year 1 as a starting point. If one looks at a January mean of year 451 produced by a CM2.x experiment while using a program that does its calculations using a noleap 365 day calendar, the date will appear as 16 Jan 451. But a program that bases its calculations on a calendar that includes leap days will show the date as being sometime in September of year 450. Over 451 years, more than 100 leap days would have accumulated, leading to a multi-month offset in the calendar calculation.
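The size of the error in this example can be checked with a short Python sketch (standard library only; datetime uses the proleptic Gregorian calendar, which here plays the role of the program that ignores the noleap attribute):

```python
from datetime import date, timedelta

# January mean of model year 451 on a noleap (365-day) calendar, with
# units "days since 1-01-01 00:00:00": 450 whole years plus 15 days.
noleap_days = (451 - 1) * 365 + 15

# A program that wrongly interprets that count on a Gregorian calendar:
wrong = date(1, 1, 1) + timedelta(days=noleap_days)
print(wrong)  # lands in September of year 450 instead of 16 Jan 451

# The offset equals the number of Gregorian leap days in years 1..450:
leap_days = sum(1 for y in range(1, 451)
                if (y % 4 == 0 and y % 100 != 0) or y % 400 == 0)
print(leap_days)  # more than 100, i.e. a multi-month offset
```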
                     
  1. Use the 'sdfopen' command -- This is the simplest because the user does not need to provide anything more than a file name. GrADS checks all the attribute metadata in the file to determine how to place the data into a lat/lon/lev/time grid context inside GrADS. For the time axis, this is done by reading the first two data values, determining a 'start' and 'increment', and calculating the remainder. Unfortunately, when the time axis units are (for example) "hours since 1980-01-01 00:00:00" and the first two values are 2453544 and 2454216, the calculation of the dates associated with these two large integers is done using a Gregorian calendar, not a 365-day NOLEAP calendar, even if that calendar attribute exists in the data file. So the initial date and the increment may well be incorrect, and thus the time axis values will not be correct when this type of data file is opened with 'sdfopen'.

Option #1 will not work reliably with 365-day NOLEAP data files. Option #3 will only work with GrADS 1.9. Option #2 is the easiest and quickest. However, if you plan to be templating (or aggregating) several netcdf files together, then it is worth the effort to write a full descriptor file and use option #3 -- the use of 'xdfopen' with templated data sets is known to lead to memory leaks and eventual core dumps.

Building HDF5 and NETCDF with Intel compiler

http://software.intel.com/en-us/articles/performance-tools-for-software-developers-building-hdf5-with-intel-compilers

http://software.intel.com/en-us/articles/performance-tools-for-software-developers-building-netcdf-with-the-intel-compilers
==================================================================
http://www.cnblogs.com/panfeng412/archive/2011/10/20/library_path-and-ld_library_path.html
The difference between the LIBRARY_PATH and LD_LIBRARY_PATH environment variables
LIBRARY_PATH and LD_LIBRARY_PATH are two environment variables on Linux; their meanings and uses are as follows:
The LIBRARY_PATH environment variable specifies additional paths to search for shared libraries while a program is being compiled, for example the directories holding the libraries gcc needs at link time. It is set as follows (where LIBDIR1 and LIBDIR2 are two library directories):
export LIBRARY_PATH=LIBDIR1:LIBDIR2:$LIBRARY_PATH
The LD_LIBRARY_PATH environment variable specifies paths other than the system defaults to search for dynamic libraries while a program is being loaded and run; note that the paths in LD_LIBRARY_PATH are searched before the system default paths. It is set as follows (where LIBDIR1 and LIBDIR2 are two library directories):
export LD_LIBRARY_PATH=LIBDIR1:LIBDIR2:$LD_LIBRARY_PATH
For example, a program we develop will often depend on one or more dynamic libraries. To keep the program portable, we can place the compiled libraries in directories of our choosing and add those directories to LD_LIBRARY_PATH as shown above; the program can then locate and load the libraries at run time.
Differences and usage:

At development time, set LIBRARY_PATH so that gcc can find the libraries it needs at link time.
At release time, set LD_LIBRARY_PATH so that the program can automatically find the dynamic libraries it needs when it is loaded and run.
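A minimal sketch of the two settings (the directory /opt/mylibs/lib is a made-up example; substitute the directory that holds your own libraries):

```shell
# Link time: lets gcc find libraries named with -l when building.
export LIBRARY_PATH=/opt/mylibs/lib:$LIBRARY_PATH

# Run time: lets the dynamic loader find shared libraries at startup;
# these directories are searched before the system default paths.
export LD_LIBRARY_PATH=/opt/mylibs/lib:$LD_LIBRARY_PATH

echo "$LD_LIBRARY_PATH"
```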