A. What is the ID3 algorithm?
ID3 is a greedy algorithm for constructing decision trees. It grew out of the Concept Learning System (CLS) and uses the rate of decrease in information entropy as the criterion for selecting test attributes: at each node it selects, among the attributes not yet used for splitting, the one with the highest information gain, and repeats this process until the resulting tree classifies the training examples perfectly.
Background of the ID3 algorithm
ID3 is a classification and prediction algorithm first proposed by J. Ross Quinlan, then at the University of Sydney, in 1975; its core concept is information entropy. ID3 computes the information gain of every attribute, treats attributes with high information gain as good attributes, selects the attribute with the highest information gain as the splitting criterion at each split, and repeats this process until it produces a decision tree that classifies the training examples perfectly.
B. Briefly describe the basic principle and steps of the ID3 algorithm
1. Basic principle:
Using information gain (based on information entropy) as the measure for selecting the attribute at each decision-tree node, ID3 always picks the attribute that carries the most information (the largest information gain), i.e. the attribute whose split leaves the smallest entropy, so as to build a decision tree along which the entropy drops as fast as possible, reaching 0 at the leaf nodes. (For information entropy, unconditional entropy, conditional entropy, and information gain, consult other references.)
Conditions under which the tree stops growing, and how the class value of a leaf node is assigned:
① Every instance in the data subset already belongs to the same class; the leaf node takes the class of the current samples.
② The data subset still contains a mixture of classes, but no unused attribute is left to split on; the leaf node takes its class by majority vote over the current samples.
③ The data subset is empty; the class is assigned by majority vote over the entire sample set.
2. Steps:
Once the stopping conditions and information entropy above are understood, the steps are straightforward: ① for the current sample set, compute the information gain of every attribute not yet used; ② split on the attribute with the highest gain, creating one branch per attribute value; ③ recurse on each branch until one of the stopping conditions is met. A minimal sketch of the entropy and gain computation follows.
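To make the computation concrete, here is a minimal, self-contained Java sketch, not taken from the original answer; the class name Id3Math and both method names are invented for illustration. entropy() implements H(D) = -Σ p_i·log2(p_i) and infoGain() implements g(D, A) = H(D) - Σ_j (|D_j|/|D|)·H(D_j):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Id3Math {

    // Entropy of the class labels: H(D) = -sum_i p_i * log2(p_i).
    public static double entropy(List<String> labels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : labels) {
            counts.merge(label, 1, Integer::sum);
        }
        double h = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / labels.size();
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // Information gain of a nominal attribute:
    // g(D, A) = H(D) - sum_j (|D_j| / |D|) * H(D_j),
    // where D_j collects the labels of rows whose attribute value is the j-th value.
    public static double infoGain(List<String> attrValues, List<String> labels) {
        Map<String, List<String>> partitions = new HashMap<>();
        for (int i = 0; i < labels.size(); i++) {
            partitions.computeIfAbsent(attrValues.get(i), k -> new ArrayList<>())
                      .add(labels.get(i));
        }
        double conditional = 0.0;
        for (List<String> part : partitions.values()) {
            conditional += ((double) part.size() / labels.size()) * entropy(part);
        }
        return entropy(labels) - conditional;
    }
}

For example, entropy(List.of("yes", "yes", "no", "no")) is exactly 1 bit, and an attribute that splits those four rows into two pure subsets has an information gain of 1.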
C. Looking for the Java source code of Weka's ID3 algorithm
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
* Id3.java
* Copyright (C) 1999 University of Waikato, Hamilton, New Zealand
*
*/
package weka.classifiers.trees;
import weka.classifiers.Classifier;
import weka.classifiers.Sourcable;
import weka.core.Attribute;
import weka.core.Capabilities;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.NoSupportForMissingValuesException;
import weka.core.RevisionUtils;
import weka.core.TechnicalInformation;
import weka.core.TechnicalInformationHandler;
import weka.core.Utils;
import weka.core.Capabilities.Capability;
import weka.core.TechnicalInformation.Field;
import weka.core.TechnicalInformation.Type;
import java.util.Enumeration;
/**
<!-- globalinfo-start -->
* Class for constructing an unpruned decision tree based on the ID3 algorithm. Can only deal with nominal attributes. No missing values allowed. Empty leaves may result in unclassified instances. For more information see: <br/>
* <br/>
* R. Quinlan (1986). Induction of decision trees. Machine Learning. 1(1):81-106.
* <p/>
<!-- globalinfo-end -->
*
<!-- technical-bibtex-start -->
* BibTeX:
* <pre>
* @article{Quinlan1986,
* author = {R. Quinlan},
* journal = {Machine Learning},
* number = {1},
* pages = {81-106},
* title = {Induction of decision trees},
* volume = {1},
* year = {1986}
* }
* </pre>
* <p/>
<!-- technical-bibtex-end -->
*
<!-- options-start -->
* Valid options are: <p/>
*
* <pre> -D
* If set, classifier is run in debug mode and
* may output additional info to the console</pre>
*
<!-- options-end -->
*
* @author Eibe Frank ([email protected])
* @version $Revision: 6404 $
*/
public class Id3
extends Classifier
implements TechnicalInformationHandler, Sourcable {
/** for serialization */
static final long serialVersionUID = -2693678647096322561L;
/** The node's successors. */
private Id3[] m_Successors;
/** Attribute used for splitting. */
private Attribute m_Attribute;
/** Class value if node is leaf. */
private double m_ClassValue;
/** Class distribution if node is leaf. */
private double[] m_Distribution;
/** Class attribute of dataset. */
private Attribute m_ClassAttribute;
/**
* Returns a string describing the classifier.
* @return a description suitable for the GUI.
*/
public String globalInfo() {
return "Class for constructing an unpruned decision tree based on the ID3 "
+ "algorithm. Can only deal with nominal attributes. No missing values "
+ "allowed. Empty leaves may result in unclassified instances. For more "
+ "information see: "
+ getTechnicalInformation().toString();
}
/**
* Returns an instance of a TechnicalInformation object, containing
* detailed information about the technical background of this class,
* e.g., paper reference or book this class is based on.
*
* @return the technical information about this class
*/
public TechnicalInformation getTechnicalInformation() {
TechnicalInformation result;
result = new TechnicalInformation(Type.ARTICLE);
result.setValue(Field.AUTHOR, "R. Quinlan");
result.setValue(Field.YEAR, "1986");
result.setValue(Field.TITLE, "Induction of decision trees");
result.setValue(Field.JOURNAL, "Machine Learning");
result.setValue(Field.VOLUME, "1");
result.setValue(Field.NUMBER, "1");
result.setValue(Field.PAGES, "81-106");
return result;
}
/**
* Returns default capabilities of the classifier.
*
* @return the capabilities of this classifier
*/
public Capabilities getCapabilities() {
Capabilities result = super.getCapabilities();
result.disableAll();
// attributes
result.enable(Capability.NOMINAL_ATTRIBUTES);
// class
result.enable(Capability.NOMINAL_CLASS);
result.enable(Capability.MISSING_CLASS_VALUES);
// instances
result.setMinimumNumberInstances(0);
return result;
}
/**
* Builds Id3 decision tree classifier.
*
* @param data the training data
* @exception Exception if classifier can't be built successfully
*/
public void buildClassifier(Instances data) throws Exception {
// can classifier handle the data?
getCapabilities().testWithFail(data);
// remove instances with missing class
data = new Instances(data);
data.deleteWithMissingClass();
makeTree(data);
}
/**
* Method for building an Id3 tree.
*
* @param data the training data
* @exception Exception if decision tree can't be built successfully
*/
private void makeTree(Instances data) throws Exception {
// Check if no instances have reached this node.
if (data.numInstances() == 0) {
m_Attribute = null;
m_ClassValue = Instance.missingValue();
m_Distribution = new double[data.numClasses()];
return;
}
// Compute attribute with maximum information gain.
double[] infoGains = new double[data.numAttributes()];
Enumeration attEnum = data.enumerateAttributes();
while (attEnum.hasMoreElements()) {
Attribute att = (Attribute) attEnum.nextElement();
infoGains[att.index()] = computeInfoGain(data, att);
}
m_Attribute = data.attribute(Utils.maxIndex(infoGains));
// Make leaf if information gain is zero.
// Otherwise create successors.
if (Utils.eq(infoGains[m_Attribute.index()], 0)) {
m_Attribute = null;
m_Distribution = new double[data.numClasses()];
Enumeration instEnum = data.enumerateInstances();
while (instEnum.hasMoreElements()) {
Instance inst = (Instance) instEnum.nextElement();
m_Distribution[(int) inst.classValue()]++;
}
Utils.normalize(m_Distribution);
m_ClassValue = Utils.maxIndex(m_Distribution);
m_ClassAttribute = data.classAttribute();
} else {
Instances[] splitData = splitData(data, m_Attribute);
m_Successors = new Id3[m_Attribute.numValues()];
for (int j = 0; j < m_Attribute.numValues(); j++) {
m_Successors[j] = new Id3();
m_Successors[j].makeTree(splitData[j]);
}
}
}
/**
* Classifies a given test instance using the decision tree.
*
* @param instance the instance to be classified
* @return the classification
* @throws NoSupportForMissingValuesException if instance has missing values
*/
public double classifyInstance(Instance instance)
throws NoSupportForMissingValuesException {
if (instance.hasMissingValue()) {
throw new NoSupportForMissingValuesException("Id3: no missing values, "
+ "please.");
}
if (m_Attribute == null) {
return m_ClassValue;
} else {
return m_Successors[(int) instance.value(m_Attribute)].
classifyInstance(instance);
}
}
/**
* Computes class distribution for instance using decision tree.
*
* @param instance the instance for which distribution is to be computed
* @return the class distribution for the given instance
* @throws NoSupportForMissingValuesException if instance has missing values
*/
public double[] distributionForInstance(Instance instance)
throws NoSupportForMissingValuesException {
if (instance.hasMissingValue()) {
throw new NoSupportForMissingValuesException("Id3: no missing values, "
+ "please.");
}
if (m_Attribute == null) {
return m_Distribution;
} else {
return m_Successors[(int) instance.value(m_Attribute)].
distributionForInstance(instance);
}
}
/**
* Prints the decision tree using the private toString method from below.
*
* @return a textual description of the classifier
*/
public String toString() {
if ((m_Distribution == null) && (m_Successors == null)) {
return "Id3: No model built yet.";
}
return "Id3 " + toString(0);
}
/**
* Computes information gain for an attribute.
*
* @param data the data for which info gain is to be computed
* @param att the attribute
* @return the information gain for the given attribute and data
* @throws Exception if computation fails
*/
private double computeInfoGain(Instances data, Attribute att)
throws Exception {
double infoGain = computeEntropy(data);
Instances[] splitData = splitData(data, att);
for (int j = 0; j < att.numValues(); j++) {
if (splitData[j].numInstances() > 0) {
infoGain -= ((double) splitData[j].numInstances() /
(double) data.numInstances()) *
computeEntropy(splitData[j]);
}
}
return infoGain;
}
/**
* Computes the entropy of a dataset.
*
* @param data the data for which entropy is to be computed
* @return the entropy of the data's class distribution
* @throws Exception if computation fails
*/
private double computeEntropy(Instances data) throws Exception {
double [] classCounts = new double[data.numClasses()];
Enumeration instEnum = data.enumerateInstances();
while (instEnum.hasMoreElements()) {
Instance inst = (Instance) instEnum.nextElement();
classCounts[(int) inst.classValue()]++;
}
double entropy = 0;
for (int j = 0; j < data.numClasses(); j++) {
if (classCounts[j] > 0) {
entropy -= classCounts[j] * Utils.log2(classCounts[j]);
}
}
entropy /= (double) data.numInstances();
return entropy + Utils.log2(data.numInstances());
}
/**
* Splits a dataset according to the values of a nominal attribute.
*
* @param data the data which is to be split
* @param att the attribute to be used for splitting
* @return the sets of instances produced by the split
*/
private Instances[] splitData(Instances data, Attribute att) {
Instances[] splitData = new Instances[att.numValues()];
for (int j = 0; j < att.numValues(); j++) {
splitData[j] = new Instances(data, data.numInstances());
}
Enumeration instEnum = data.enumerateInstances();
while (instEnum.hasMoreElements()) {
Instance inst = (Instance) instEnum.nextElement();
splitData[(int) inst.value(att)].add(inst);
}
for (int i = 0; i < splitData.length; i++) {
splitData[i].compactify();
}
return splitData;
}
/**
* Outputs a tree at a certain level.
*
* @param level the level at which the tree is to be printed
* @return the tree as string at the given level
*/
private String toString(int level) {
StringBuffer text = new StringBuffer();
if (m_Attribute == null) {
if (Instance.isMissingValue(m_ClassValue)) {
text.append(": null");
} else {
text.append(": " + m_ClassAttribute.value((int) m_ClassValue));
}
} else {
for (int j = 0; j < m_Attribute.numValues(); j++) {
text.append(" ");
for (int i = 0; i < level; i++) {
text.append("| ");
}
text.append(m_Attribute.name() + " = " + m_Attribute.value(j));
text.append(m_Successors[j].toString(level + 1));
}
}
return text.toString();
}
/**
* Adds this tree recursively to the buffer.
*
* @param id the unique id for the method
* @param buffer the buffer to add the source code to
* @return the last ID being used
* @throws Exception if something goes wrong
*/
protected int toSource(int id, StringBuffer buffer) throws Exception {
int result;
int i;
int newID;
StringBuffer[] subBuffers;
buffer.append(" ");
buffer.append(" protected static double node" + id + "(Object[] i) { ");
// leaf?
if (m_Attribute == null) {
result = id;
if (Double.isNaN(m_ClassValue)) {
buffer.append(" return Double.NaN;");
} else {
buffer.append(" return " + m_ClassValue + ";");
}
if (m_ClassAttribute != null) {
buffer.append(" // " + m_ClassAttribute.value((int) m_ClassValue));
}
buffer.append(" ");
buffer.append(" } ");
} else {
buffer.append(" checkMissing(i, " + m_Attribute.index() + "); ");
buffer.append(" // " + m_Attribute.name() + " ");
// subtree calls
subBuffers = new StringBuffer[m_Attribute.numValues()];
newID = id;
for (i = 0; i < m_Attribute.numValues(); i++) {
newID++;
buffer.append(" ");
if (i > 0) {
buffer.append("else ");
}
buffer.append("if (((String) i[" + m_Attribute.index()
+ "]).equals("" + m_Attribute.value(i) + "")) ");
buffer.append(" return node" + newID + "(i); ");
subBuffers[i] = new StringBuffer();
newID = m_Successors[i].toSource(newID, subBuffers[i]);
}
buffer.append(" else ");
buffer.append(" throw new IllegalArgumentException("Value '" + i["
+ m_Attribute.index() + "] + "' is not allowed!"); ");
buffer.append(" } ");
// output subtree code
for (i = 0; i < m_Attribute.numValues(); i++) {
buffer.append(subBuffers[i].toString());
}
subBuffers = null;
result = newID;
}
return result;
}
/**
* Returns a string that describes the classifier as source. The
* classifier will be contained in a class with the given name (there may
* be auxiliary classes),
* and will contain a method with the signature:
* <pre><code>
* public static double classify(Object[] i);
* </code></pre>
* where the array <code>i</code> contains elements that are either
* Double, String, with missing values represented as null. The generated
* code is public domain and comes with no warranty. <br/>
* Note: works only if class attribute is the last attribute in the dataset.
*
* @param className the name that should be given to the source class.
* @return the object source described by a string
* @throws Exception if the source can't be computed
*/
public String toSource(String className) throws Exception {
StringBuffer result;
int id;
result = new StringBuffer();
result.append("class " + className + " { ");
result.append(" private static void checkMissing(Object[] i, int index) { ");
result.append(" if (i[index] == null) ");
result.append(" throw new IllegalArgumentException("Null values "
+ "are not allowed!"); ");
result.append(" } ");
result.append(" public static double classify(Object[] i) { ");
id = 0;
result.append(" return node" + id + "(i); ");
result.append(" } ");
toSource(id, result);
result.append("} ");
return result.toString();
}
/**
* Returns the revision string.
*
* @return the revision
*/
public String getRevision() {
return RevisionUtils.extract("$Revision: 6404 $");
}
/**
* Main method.
*
* @param args the options for the classifier
*/
public static void main(String[] args) {
runClassifier(new Id3(), args);
}
}
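The class above plugs into the standard Weka workflow. Here is a brief usage sketch, assuming Weka is on the classpath and a purely nominal ARFF file such as the weather.nominal.arff that ships with Weka (the file name and demo class are illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.classifiers.trees.Id3;
import weka.core.Instances;

public class Id3Demo {
    public static void main(String[] args) throws Exception {
        // Load a purely nominal dataset; Id3 cannot handle numeric attributes
        // or missing values (see getCapabilities() above).
        BufferedReader reader = new BufferedReader(new FileReader("weather.nominal.arff"));
        Instances data = new Instances(reader);
        reader.close();
        // By Weka convention, take the last attribute as the class.
        data.setClassIndex(data.numAttributes() - 1);

        Id3 tree = new Id3();
        tree.buildClassifier(data);
        System.out.println(tree);                        // textual tree via toString()
        System.out.println(tree.toSource("Id3Weather")); // generated Java source via Sourcable
    }
}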
D. 5.10 Decision trees and the ID3 algorithm
https://blog.csdn.net/dorisi_h_n_q/article/details/82787295
A decision tree is a tree structure (binary or otherwise). The decision process starts at the root node: the relevant feature attribute of the item to be classified is tested, an outgoing branch is chosen according to its value, and this continues until a leaf node is reached; the class stored at that leaf is the decision result.
The key step in building a decision tree is choosing the splitting attribute: at a given node, branches are constructed according to the different values of some feature attribute, with the goal of making each resulting subset as "pure" as possible, i.e. of having all items in a subset belong to the same class.
In short, the guiding principle of decision-tree splitting is: make disordered data more ordered.
Attribute splitting falls into three cases: ① the attribute is discrete and a binary tree is not required, so one branch is created per attribute value; ② the attribute is discrete and a binary tree is required, so a subset of values is tested ("belongs to this subset" versus "does not"); ③ the attribute is continuous, so a split point split_point is chosen and the two branches are the values greater than split_point and those not greater.
The crux of decision-tree construction is the attribute selection measure (a way to quantify which split is most worthwhile). It serves as the splitting criterion and determines both the topology of the tree and the choice of the split point split_point.
There are many attribute selection measures. Construction generally proceeds top-down by recursive divide-and-conquer, with a greedy, non-backtracking strategy. Here we introduce the widely used ID3 algorithm.
A greedy algorithm always makes whatever choice looks best at the moment: rather than seeking a global optimum, it settles for a solution that is locally optimal in some sense.
The concept of entropy originated in physics, where it measures the degree of disorder of a thermodynamic system.
In information theory, entropy measures uncertainty. In 1948 Shannon introduced information entropy, defined in terms of the probabilities of discrete random events: the more ordered a system is, the lower its information entropy, and the more chaotic it is, the higher. Information entropy can thus be regarded as a measure of how ordered a system is.
Entropy is defined as the expected value of information, so we first need the definition of information itself. If the items to be classified may fall into several classes, the information of the symbol x_i is defined as l(x_i) = -log2 p(x_i), where p(x_i) is the probability of that class; the entropy is then the expectation H = -Σ_i p(x_i)·log2 p(x_i). For example, a fair coin has entropy -2·(1/2)·log2(1/2) = 1 bit.
The change in information before and after a dataset is split is called the information gain. Knowing how to compute it, we can evaluate the information gain obtained by splitting the dataset on each feature; the feature with the highest information gain is the best choice.
The conditional entropy measures the remaining uncertainty of a random variable given another random variable. The conditional entropy of Y given X is defined as the expectation, over X, of the entropy of the conditional distribution of Y given X:
H(Y|X) = Σ_{i=1..n} p_i·H(Y|X = x_i), where p_i = P(X = x_i).
Accordingly, if the training set D is partitioned by attribute A into subsets D_1, …, D_v, the expected information of the partition by A is
H(D|A) = Σ_{j=1..v} (|D_j| / |D|)·H(D_j)
and the information gain is the difference between the two:
g(D, A) = H(D) - H(D|A)
Whenever ID3 needs to split, it computes the information gain of every attribute and splits on the attribute with the largest gain (the gain ratio, by contrast, is the criterion used by C4.5).
Steps: 1. for the current sample set, compute the information gain of every attribute; 2. select the attribute with the highest gain as the test attribute; 3. create one branch per value of that attribute and partition the samples accordingly; 4. recurse on each branch until every subset satisfies a stopping condition.
CLS is the most primitive decision-tree classification algorithm. Its basic flow is to start from an empty tree and keep selecting attributes from the decision table to extend the growing tree, until the tree satisfies the classification requirement. The main problem with CLS is that the choice of each newly added attribute is largely arbitrary. ID3 improves on CLS mainly by removing this arbitrariness from attribute selection.
The improvements built on ID3 (realized in C4.5) mainly include: using the information gain ratio instead of raw information gain as the attribute selection criterion; pruning during tree construction, which avoids overfitting; handling incomplete attributes and continuous-valued data; using k-fold cross-validation, which reduces computational cost; and, regarding the form of the input data, making the algorithm more broadly applicable.
The magnitude of an information gain is relative to the training set and has no absolute meaning: when a classification problem is hard, i.e. when the empirical entropy of the training set is large, information gains come out large, and conversely they come out small. The information gain ratio corrects for this and serves as another criterion for feature selection.
The information gain ratio gR(D, A) of a feature A with respect to training set D is defined as the ratio of its information gain g(D, A) to the empirical entropy H(D) of the training set:
gR(D,A) = g(D,A) / H(D)
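In code this is a one-liner on top of the Id3Math sketch from section B (the helpers are again hypothetical; note also that Quinlan's C4.5 divides by the split information of A rather than by H(D), whereas this sketch follows the text's formula):

// Gain ratio as defined above: gR(D, A) = g(D, A) / H(D).
public static double gainRatio(List<String> attrValues, List<String> labels) {
    double h = Id3Math.entropy(labels);
    return h == 0.0 ? 0.0 : Id3Math.infoGain(attrValues, labels) / h;
}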
The decision-tree model in sklearn is a CART tree. CART is a binary recursive partitioning technique: the current sample set is always divided into two subsets, so every non-leaf node of the generated tree has exactly two branches, and the decision tree CART produces is a structurally simple binary tree.
The Classification and Regression Trees algorithm (CART) is based on this binary recursive partitioning. Because each non-leaf node has at most two branches, the tree is simple in structure and, compared with other decision-tree algorithms, the resulting model has fewer classification rules.
The basic idea of CART classification is to recursively partition the space of predictor variables over the training sample set, building the decision-tree model as it goes, and then to prune branches using validation data, yielding a decision-tree classifier that meets the requirements.
Like C4.5, the CART classification algorithm can handle both discrete and continuous data. CART selects the test attribute according to the Gini index: the smaller the Gini value, the better the split. For a sample set T, the Gini value is computed as
gini(T) = 1 - Σ_{i=1..n} p_i²
where p_i is the relative frequency of class i within T.
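A hypothetical helper in the style of the Id3Math sketch above makes the formula concrete:

// Gini index of a set of class labels: gini(T) = 1 - sum_i p_i^2.
public static double gini(List<String> labels) {
    Map<String, Integer> counts = new HashMap<>();
    for (String label : labels) {
        counts.merge(label, 1, Integer::sum);
    }
    double sumSquares = 0.0;
    for (int count : counts.values()) {
        double p = (double) count / labels.size();
        sumSquares += p * p;
    }
    return 1.0 - sumSquares;
}

A pure set gives gini = 0; two equally frequent classes give the maximum of 0.5.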
Advantages of CART: besides the high accuracy, efficiency, and simplicity of decision trees in general, it has some traits of its own. CART places no requirements on the probability distributions of the target and predictor variables, which avoids errors caused by distributional mismatch between them; it can handle missing values, avoiding the bias they would otherwise cause; it can handle isolated leaf nodes, so that records whose attributes differ from the rest of the dataset do not distort further branching; its binary splits make full use of all the data in the set, allowing the complete tree structure to be discovered; and it is easier to understand than other models, with rules that can be read off the model and interpreted very intuitively.
Disadvantages of CART: it is a mining algorithm designed for large sample sets and is not very stable when the sample set is small; and because each selected attribute may produce only two child nodes, errors can grow rather quickly when there are many classes.
sklearn.tree.DecisionTreeClassifier
1. Install graphviz.msi; just click Next through the installer.
Worked example: whenever a split is needed, compute the information gain of every attribute and split on the largest. Two of the candidate attributes here are friend density and whether a real profile photo H is used.
Information gain when splitting on friend density:
Information gain when splitting on whether the real profile photo H is used:
The information gain from splitting on friend density is larger than that from splitting on the real profile photo, so we should split on friend density first.
E. A question about converting ID3 algorithm Java code to C#
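// Returns the class label with the largest count (a majority vote over the class counts).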
public string maxClass(Dictionary<string, Int32> classes)
{
string maxC = "";
int max = -1;
foreach (KeyValuePair<string, Int32> tmpClass in classes)
{
if (tmpClass.Value > max)
{
max = tmpClass.Value;
maxC = tmpClass.Key;
}
}
return maxC;
}
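For comparison, the Java original that this C# method mirrors would look roughly as follows (a sketch; the Java side is not shown in the original question, so the signature is assumed):

import java.util.Map;

// Java counterpart of the C# maxClass above: majority vote over class counts.
public static String maxClass(Map<String, Integer> classes) {
    String maxC = "";
    int max = -1;
    for (Map.Entry<String, Integer> tmpClass : classes.entrySet()) {
        if (tmpClass.getValue() > max) {
            max = tmpClass.getValue();
            maxC = tmpClass.getKey();
        }
    }
    return maxC;
}

The translation is mechanical: Dictionary<string, Int32> becomes Map<String, Integer>, the foreach over KeyValuePair becomes iteration over entrySet(), and .Value/.Key become getValue()/getKey().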
F. What is the ID3 algorithm?
ID3 was first proposed by Quinlan. The algorithm is grounded in information theory, using information entropy and information gain as its yardsticks to carry out inductive classification of data. Some basic concepts from information theory follow:
Definition 1: If there are n messages of equal probability, then the probability p of each message is 1/n, and the information conveyed by one message is -log2(1/n) = log2 n.
Definition 2: If there are n messages with probability distribution P = (p1, p2, …, pn), then the information conveyed by the distribution is called the entropy of P, written
I(P) = -(i=1 to n sum)(pi·log2(pi)).
Definition 3: If a record set T is partitioned by the value of the class attribute into mutually independent classes C1, C2, …, Ck, then the information needed to identify the class of an element of T is Info(T) = I(P), where P is the probability distribution of C1, C2, …, Ck, namely P = (|C1|/|T|, …, |Ck|/|T|).
Definition 4: If we first partition T into sets T1, T2, …, Tn according to the value of a non-class attribute X, then the information needed to determine the class of an element of T is obtained as the weighted average of the Info(Ti):
Info(X, T) = (i=1 to n sum)((|Ti|/|T|)·Info(Ti))
Definition 5: The information gain is the difference between two information quantities: the information needed to identify an element of T, and the information still needed once the value of attribute X has been obtained. The information gain formula is:
Gain(X, T) = Info(T) - Info(X, T)
The ID3 algorithm computes the information gain of every attribute and selects the attribute with the highest gain as the test attribute for the given set. It creates a node for the chosen test attribute, labels the node with that attribute, and creates one branch for each of the attribute's values, partitioning the samples accordingly.
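As a quick check of these definitions, take Quinlan's classic 14-row weather dataset (9 "play" and 5 "don't play" examples; the dataset is assumed here, it is not part of the original answer):

Info(T) = I(9/14, 5/14) = -(9/14)·log2(9/14) - (5/14)·log2(5/14) ≈ 0.940 bits
Splitting on Outlook (sunny: 2 play / 3 don't, overcast: 4 / 0, rain: 3 / 2):
Info(Outlook, T) = (5/14)·I(2/5, 3/5) + (4/14)·I(4/4, 0) + (5/14)·I(3/5, 2/5) ≈ 0.694 bits
Gain(Outlook, T) = 0.940 - 0.694 = 0.246 bits

Outlook's gain would then be compared with the other attributes' gains and, being the largest, Outlook would become the root test attribute.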
Data description
ID3 places certain requirements on the sample data:
Attribute-value description: the same attributes must describe every example, and each attribute must take a fixed number of values.
Predefined classes: an instance's class must be given in advance; it is not learned by ID3.
Discrete classes: the classes must be sharply delineated. Continuous classes broken into vague categories (such as metal being "hard, quite hard, flexible, soft, quite soft") are not credible.
Sufficient examples: since inductive generalization is used (i.e. the result cannot be proved), enough training cases must be provided to distinguish valid patterns and eliminate the influence of chance coincidences.
Attribute selection
How does ID3 decide which attribute is best? It uses a statistical property called information gain, which measures, in terms of entropy, how well a given attribute separates the training examples into the target classes. The attribute with the highest information gain (the information most useful for classification) is selected. To define gain precisely, we first borrow a definition from information theory, called entropy; every attribute split has an associated entropy.
G. Android code in Java for reading the ID3 information of media files
(The code below is PHP rather than Java; it is part of a class that reads and writes ID3v1 tags in MP3 files and parses MP3 frame headers. A Java sketch for Android follows it.)

$this->error = "No such file";
if ($exitonerror) $this->exitonerror();
}
}
function exitonerror() {
echo($this->error);
exit;
}
function set_id3($title = "", $author = "", $album = "", $year = "", $comment = "", $genre_id = 0) {
$this->error = false;
// Append a 128-byte ID3v1 tag at the end of the file.
$this->wfh = fopen($this->file, "a");
fseek($this->wfh, -128, SEEK_END);
fwrite($this->wfh, pack("a3a30a30a30a4a30C1", "TAG", $title, $author, $album, $year, $comment, $genre_id), 128);
fclose($this->wfh);
}
function get_id3() {
$this->id3_parsed = true;
fseek($this->fh, -128, SEEK_END);
$line = fread($this->fh, 10000);
if (preg_match("/^TAG/", $line)) {
$this->id3 = unpack("a3tag/a30title/a30author/a30album/a4year/a30comment/C1genre_id", $line);
$this->id3["genre"] = $this->id3_genres_array[$this->id3["genre_id"]];
return (true);
} else {
$this->error = "no idv3 tag found";
return (false);
}
}
// get_info() helper methods
function calculate_length($id3v2_tagsize = 0) {
$length = floor(($this->info["filesize"] - $id3v2_tagsize) / $this->info["bitrate"] * 0.008);
$min = floor($length / 60);
$min = strlen($min) == 1 ? "0$min" : $min;
$sec = $length % 60;
$sec = strlen($sec) == 1 ? "0$sec" : $sec;
return ("$min:$sec");
}
function get_info() {
// $this->get_id3v2header();
$second = $this->synchronize();
// echo("2nd byte = $second <b>" . decbin($second) . "</b><br>");
$third = ord(fread($this->fh, 1));
$fourth = ord(fread($this->fh, 1));
$this->info["version_id"] = ($second & 16) > 0 ? (($second & 8) > 0 ? 1 : 2) : (($second & 8) > 0 ? 0 : 2.5);
$this->info["version"] = $this->info_versions[$this->info["version_id"]];
$this->info["layer_id"] = ($second & 4) > 0 ? (($second & 2) > 0 ? 1 : 2) : (($second & 2) > 0 ? 3 : 0);
$this->info["layer"] = $this->info_layers[$this->info["layer_id"]];
$this->info["protection"] = ($second & 1) > 0 ? "noCRC" : "CRC";
$this->info["bitrate"] = $this->info_bitrates[$this->info["version_id"]][$this->info["layer_id"]][($third & 240)];
$this->info["sampling_rate"] = $this->info_sampling_rates[$this->info["version_id"]][($third & 12)];
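Since the question asks for Android Java, here is a minimal hedged sketch using the platform's MediaMetadataRetriever class (available since API level 10); the class name and file path are illustrative:

import android.media.MediaMetadataRetriever;

public class Id3Reader {
    // Reads basic tag metadata (title, artist, album) from a media file.
    public static String[] readTags(String path) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(path);
            String title = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE);
            String artist = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ARTIST);
            String album = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ALBUM);
            return new String[] { title, artist, album };
        } finally {
            retriever.release();
        }
    }
}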